This repository has been archived by the owner on May 22, 2023. It is now read-only.

[VM][Refactor] Move VM files to TVM runtime directory #98

Merged

Conversation

MasterJH5574
Collaborator

This is a refactor PR. It moves some necessary files from include/tvm/relax/vm/ to include/tvm/runtime/relax_vm/, and from src/relax/vm/ to src/runtime/relax_vm/.

In order to run our VM remotely through the RPC mechanism provided by TVM, and thereby enable end-to-end auto-tuning with Meta-Schedule on different backends, we need to make the VM and the Executable part of the TVM runtime. To this end, there are two possible approaches:

  • moving VM files to include/tvm/runtime/ and src/runtime/,
  • hacking CMakeLists.txt to include src/relax/vm/ as part of runtime sources and exclude the directory from compiler sources.

Per discussion with @tqchen and @YuchenJin, we agreed to move these files now. The refactor doesn't make the Relax VM runtime conflict with the Relay VM runtime, and it doesn't seem to impact our future upstreaming. Thus I crafted this PR.

Feel free to review and leave comments! Would love to hear opinions from the team :-)

cc @YuchenJin @tqchen @yongwww @ZihengJiang @sunggg

@MasterJH5574
Collaborator Author

Not sure, but should I add a test showing that the runtime runs correctly on the remote? If so, how should we do that 🤔

@tqchen
Contributor

tqchen commented Mar 22, 2022

I agree that the move as it is should be OK for now

Collaborator

@YuchenJin YuchenJin left a comment


LGTM!

@YuchenJin YuchenJin merged commit 0d6bf5b into tlc-pack:relax Mar 22, 2022
@YuchenJin
Collaborator

Merged, thanks @MasterJH5574 for the refactoring!

yongwww pushed a commit to yongwww/relax that referenced this pull request Mar 22, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Mar 23, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Apr 6, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Apr 12, 2022
jinhongyii pushed a commit to jinhongyii/relax that referenced this pull request Jun 5, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Jun 12, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Jul 15, 2022
@MasterJH5574 MasterJH5574 deleted the relax-dev/2022-03-22-runtime-refactor branch October 17, 2022 21:50
masahi added a commit to masahi/relax that referenced this pull request Jan 17, 2023
commit 5bf9c8acf12dfba9865ac9f8480341298131dec4
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 16:10:16 2023 +0900

    clean up

commit 5506d92ed9a4c48c63f192ddcb576c9665d4ad5b
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 15:39:39 2023 +0900

    link and run compiled cutlass code, result correct

commit 81d39f84ebb1a7bcfe5c2fa9f97ce2130f932dbb
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 15:13:41 2023 +0900

    compile generated cutlass code

commit c2a68e14575c2711497347d5fc93d15b88c6c79b
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 07:47:31 2023 +0900

    codegen working

commit ba26344f85ebe43f88852c8c18b754bf03df1ce1
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 19:41:47 2023 +0900

    wip

commit ed3ac6d632a4798e411573f30d1a090bc05a96fc
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 17:53:10 2023 +0900

    wip

commit 47e09e54a0d405a14a602d7a6d31c49399c5662f
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 17:32:58 2023 +0900

    wip

commit b9e5df768b188de3dda1ef0d0f3db3fd592535d9
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 17:25:37 2023 +0900

    copy codegen_c base function

commit fe20e653ecf548f07432f06cd17395b554e6faa5
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Sat Jan 14 08:43:57 2023 +0900

    add cutlass stub

commit 990eec78b58ca259bc067bb32e4020f28d88b7c8
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Sat Jan 14 08:18:57 2023 +0900

    updated cutlass revision

commit 591a8f1ba62d9f8e923f2dcc1702e7e7590e92e2
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Sat Jan 14 08:02:01 2023 +0900

    conv2d + relu DNNL offload works

commit 1365402079626eab5bf99bad96dbfa4abd750175
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Fri Jan 13 16:35:49 2023 +0900

    starting DNNL codegen

commit 4a72e7810b0df31a4fb13856b5b6320ced4e978e
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Thu Jan 12 14:02:19 2023 +0900

    clean up

commit 61cc55e94123f3064e0d1200c70f33b4a537c4ad
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 10 16:26:31 2023 +0900

    pattern based partitioning working

commit 2433733c5458302cbe05e534d6c99bec13fb6d36
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 10 08:30:20 2023 +0900

    add conv2d match & run test

commit 360429440acb7068fdfd982d597523ebe032eb20
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Jan 9 17:20:05 2023 -0500

    [Op][O2e] Indexing and datatype operators (#338)

commit e45bdb73824d120bb3b848d4fdaa54f88211b509
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Jan 9 14:59:26 2023 -0500

    [VM] Supporting "compiled" exec mode. (#331)

    * [VM] Supporting "compiled" exec mode.

    This PR adds support for the "compiled" mode to the VM.
    The compiled mode translates the Relax function into a TIR function
    and drives execution through that TIR function.

    It is different from the micro AOT codegen, which generates TIR code
    targeting the micro C runtime environment and is useful for
    resource-limited settings with a smaller set of features. Both leverage
    the low-level TIR build that is also shared with TensorIR.

    The current implementation targets the full TVM (VM) runtime, which
    comes with PackedFunc, object, tuple, closure and all kinds of rich structure
    support. This also means that we can leverage the full runtime support
    to handle things like allocation, dynamic shape, easy plugins and Python
    interaction, which are not available in more limited runtimes.

    The user directly uses the same API to load the generated code regardless
    of compiled mode or bytecode, and just needs to change one line:

    ```python
    ex = relax.vm.build(mod, target, exec_mode="compiled")
    ```

    Most of the codegen features are lifted before the codegen phase,
    so the overall implementation would be around 500 loc for each exec mode
    and can be further cut down with future introduction of PrimValue.

    The simplicity is thanks to the TVM runtime architecture that allows us
    to compose things together in objects. The only difference is how
    the PackedFunc of high-level driving is being provided.
    In the case of bytecode it is normal interpretation and in the
    case of compiled mode it is TIR.

    This is a complete implementation; unit test cases are added. All codegen
    build tests are updated to include both exec_modes and have passed locally.
    The only exception is that we skipped some special PackedFunc handling
    (printing), which can be further simplified after we introduce PrimValue.

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>

    * Address review comments

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>

commit 32c2bf74eda5ff9cb958e6d54a29c324d53f2869
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Jan 9 13:45:14 2023 -0500

    [Op][O2d] Manipulation operators (#337)

    As tracked by #332, this PR is the O2d milestone of the high-level operator introduction plan.

    This PR introduces a few manipulation operators:
    * broadcast_to
    * concat
    * expand_dims
    * flatten
    * permute_dims
    * reshape
    * split
    * squeeze
    These operators are all well-tested.
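These manipulation operators follow NumPy-like semantics; as a quick reference for the expected shapes, here is a NumPy sketch (NumPy is used only as an analogy for the semantics, not as the Relax API):

```python
import numpy as np

x = np.zeros((2, 3))

# broadcast_to: replicate data into a larger shape
b = np.broadcast_to(x, (4, 2, 3))     # shape (4, 2, 3)

# expand_dims / squeeze: add or drop length-1 axes
e = np.expand_dims(x, axis=0)         # shape (1, 2, 3)
s = np.squeeze(e, axis=0)             # shape (2, 3)

# permute_dims corresponds to np.transpose with an axis order
p = np.transpose(x, (1, 0))           # shape (3, 2)

# reshape / flatten
r = x.reshape(3, 2)                   # shape (3, 2)
f = x.reshape(-1)                     # shape (6,)

# split / concat round-trip along an axis
parts = np.split(x, 3, axis=1)        # three arrays of shape (2, 1)
c = np.concatenate(parts, axis=1)     # shape (2, 3)
```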

commit b39d11a37c899a1625ecee0ffdacc5ef5444365f
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Jan 9 10:57:19 2023 -0500

    [O2h] Neural network and linear algebra operators (#343)

commit 1d6d897ec223cc07768e0382c3e21a196ffdfac8
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sun Jan 8 20:21:50 2023 -0500

    [O2g] Convolution, pooling and image operators (#341)

commit 95f784ece1d61676b88b5455be3dab5e3ddbc75a
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sun Jan 8 16:53:10 2023 -0500

    [Op][O2f] Set and searching operators (#339)

commit be1c32d817bbbbd56329378d6d929dce79ecb0f8
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jan 9 03:38:20 2023 +0800

    simple fix jupyter error reporting (#345)

commit da11e4bf373349ce4142949099e29d11655aa88b
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun Jan 8 23:09:22 2023 +0800

    [TVMScript] Symbolic shape computing (#342)

commit 80808fbf9a02480abf337b8a5edffe34c963feec
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sat Jan 7 18:31:00 2023 -0500

    [Op][O2c] Creation operators (#336)

commit 5efc8f7224f83766875e74669e139ec82119a504
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sat Jan 7 11:14:23 2023 -0500

    [TIR] Create Layout with specified axis dtype (apache/tvm#13663) (#340)

commit ae71be06c8252c211642abb9d5b3e4583bdb6f6a
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Jan 6 16:41:18 2023 -0500

    [Op][O2b] Statistical operators (#334)

commit 8220df74e339cdb6dab38a803b80edc3cd6b92e2
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Thu Jan 5 18:31:48 2023 -0500

    [Op][O1][O2a] Utility, arithmetic and comparison operators (#333)

    As tracked by #332, this PR is the kickoff part of high-level operator introduction in Relax.

    This PR is about the milestone O1 and O2a. Specifically, this PR
    * introduces some common utility functions that the registration and StructInfo inference of each operator will often use.
    * introduces unary arithmetic operators: cos, log, negative, sigmoid, sin, sqrt, tanh.
    * refactors and introduces binary arithmetic operators: add, divide, floor_divide, multiply, subtract.
    * introduces binary comparative operators: equal, greater, greater_equal, less, less_equal, not_equal.

    These operators are well tested from three perspectives:
    P1. the op getter can get the correct op by name
    P2. their StructInfo inference results are as expected under all kinds of cases
    P3. the Relax TVMScript parser can parse scripts with the op inside

    For operators in O2a, most operators share almost the same StructInfo inference logic. Therefore, for the tests in P2, not every op in each category is tested in every case; for each case, it suffices to test only part of the ops in that category. This keeps the testing file from growing overlarge.
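As an illustration of the shared inference logic, the core of broadcast-style shape inference for binary operators can be sketched in a few lines of plain Python (a hypothetical helper covering static shapes only, not Relax's actual implementation):

```python
def broadcast_shapes(lhs, rhs):
    """NumPy-style broadcast of two static shapes; raises on mismatch."""
    # Pad the shorter shape with leading 1s, then merge dimension by dimension.
    ndim = max(len(lhs), len(rhs))
    lhs = (1,) * (ndim - len(lhs)) + tuple(lhs)
    rhs = (1,) * (ndim - len(rhs)) + tuple(rhs)
    out = []
    for a, b in zip(lhs, rhs):
        if a == b or b == 1:
            out.append(a)
        elif a == 1:
            out.append(b)
        else:
            raise ValueError(f"incompatible dimensions {a} and {b}")
    return tuple(out)

# add((2, 3), (3,)) broadcasts to (2, 3); comparison ops use the same rule.
result = broadcast_shapes((4, 1, 3), (2, 1))   # (4, 2, 3)
```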

commit f1cab0a05f05829c4c35e2a7e613bd69f2a17fae
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Jan 5 20:43:28 2023 +0800

    [TVMScript] Ensure consistent struct info between assign lhs and rhs with sinfo annotation (#328)

    * [TVMScript] Ensure consistent struct info between assign lhs and rhs with sinfo annotation

    * fix

    * fix

commit dc7072efe290d7e8c69d8e216311510981fc82e1
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Jan 4 10:13:08 2023 -0500

    [REFACTOR] Hide VM Impl, Improve execution logic. (#326)

    * [REFACTOR] Hide VM Impl, Improve execution logic.

    This PR refactors the VM by hiding most of the VM implementation
    and improving the overall execution logic.

    - Unifies PackedFunc and Closure Table.
    - Update Closure mechanism to no longer depend on string.
    - Update VMMemoryLower to VMBuiltinLower to incorporate more VM intrinsic lowering,
      moving some of the codegen intrinsics to this phase.
    - Allow directly pass in function index as VM instruction.

    * Address comment

commit 2449d8c205f0b6e2c346132695b56039b07e9a10
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Tue Jan 3 22:04:16 2023 -0500

    [IR][ASTPrinter] Tweaks to AST printer's handling of struct info (#330)

commit 2d352807090ba1b7e898fbdcb83d6d9427c762cf
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue Jan 3 23:20:47 2023 +0800

    [TVMScript] Enforce `I.DeclareFunc` to have function signature (#329)

commit dcae50e836a0c2999f52d96a372fc7de584951f4
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Jan 2 15:21:49 2023 -0500

    [BACKEND] Refactor and introduce full match-cast support. (#324)

    * [BACKEND] Refactor and introduce full match-cast support.

    This PR refactors VMShapeLower to introduce full match-cast support
    that enables nested tuples, type checks at argument boundaries
    and symbolic shape computation.

    Along the way we also refactor and clean up some of the VM codegen
    logic and add unit tests for different stages.

    * address comments

commit a36920bf672d22e1d31e1e6f81d0447fd7a55806
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jan 2 23:31:04 2023 +0800

    [TVMScript] Fix empty TupleStructInfo (#327)

commit 80710a826bda66532eeda978668ed157b471b186
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Fri Dec 30 15:57:50 2022 -0500

    [CONTAINER] Hash/Equal/JSON support for ShapeTuple (#325)

    This PR adds hash/equal/json support for shape tuple.

commit 343a1e7e2174612031c70ba8547577c7d21839e4
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Thu Dec 29 18:33:17 2022 -0500

    [REFACTOR] StructInfo M3: MatchShape=>MatchCast (#323)

    * Introduce match cast, and code changes along

    * add match_cast parser support (#9)

    * Match cast support for VMShapeLower CanonicalizeBinding

    * Remove `match_shape` (#12)

    * Refactor ExprVisitor/Mutator to consider Expr in StructInfo.

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>

commit e332285559d61db1c5033b8d50cd9d4af6c6b6f4
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Thu Dec 29 01:28:09 2022 -0500

    [REFACTOR] StructInfo M2: Cleanups on legacy shape related items  (#320)

    * [REFACTOR] Remove shape function

    * [WIP] Remove shape_, runtime_dep shape

    * Remove shape_ pass Compile

    * Remove RuntimeDepShape (#11)

    * BlockBuilder: remove CanProveShapeEqual, consolidate binding emit to EmitNormalize

    * Remove DimType, make get_shape_of API different from op.shape_of

    Changes the init importing to direct import so the VSCode navigator
    can directly jump to the definition point.

    * Apply suggestions from code review

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    * Clarify cases where struct info can be deterministically derived

    * Fix remaining testcases

    * Remove InferShape/Type per comment.

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit edadf247551f526188c0a08b3812ffc0a1f9d8bd
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 23 14:46:07 2022 -0500

    [Analysis] Optionally check structure info in well-formedness check (#321)

    With the introduction of structure info in #314, the well-formedness check will report malformed whenever an Expr doesn’t have defined structure info.

    However, when writing tests for the well-formedness check and the normalizer, we usually construct the Exprs manually, which means their structure info is undefined most of the time. As a consequence, the well-formedness check will always complain "the Expr xxx doesn't have structure info populated." Therefore, even when the checker fails to report the original reason for being malformed (meaning the checker is not working), the tests will still pass, and we won't realize that something is wrong with the checker.

    Thus, in this PR we add an optional flag to the well-formedness check. In well-formedness tests, we will turn off the structure info check so that the original reason of being malformed will be revealed correctly.

    ---

    This PR also cleans up the DiagnosticContext parameter in the WellFormed API - the diag_ctx has been unused since the merge of #99.

commit d548459a1736378398ab773dce413d90d49376cf
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 23 07:33:25 2022 -0500

    [Op] Enforce int64 output shape in CallTIR (#322)

commit 10a87a455bbb84b0a0d20b22bd31784b9f4b9774
Author: Chaosfan <siriusneo@sjtu.edu.cn>
Date:   Fri Dec 23 08:03:48 2022 +0800

    [Bugfix] Handle function name properly in Relax TVMScript printer (#317)

    * remove relax_func_name_ and change logic

    * well_formed check for globalvar and gsymbol consistency

    * revise the logic in well_formed and update test

    * Remove `global_symbol` in test_function_attr.py

    * Update docs

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 29aebb9d24cbf52ab21fd98996633534301ef34d
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Dec 21 20:21:57 2022 -0500

    [REFACTOR] M1: Change parser/printer to only depend on struct info (#319)

    * [REFACTOR] StructInfo M1: Parser/printer/Var/Function to only depend on struct info field

    * Update src/relax/backend/vm/vm_shape_lower.cc

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    * Address comments

    * Allow function to have default value

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit e6173430f491c1d88d2ab77ce0ab43a8c602df30
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Dec 21 00:42:29 2022 -0500

    [REFACTOR][ARCH] Introduce StructInfo M0 (#314)

    * [IR] Introduce StructInfo

    * StructInfoFunctor and Analysis Support

    * [TVMScript] Parse type/shape annotation with StructInfo

    * remove runtime type assign

    * Remove type/shape during parsing (#2)

    * Normalizer prep: simple checks and legacy function renaming.

    * Struct info deduction in BlockBuilder.

    * Two TODOs

    * StructInfo Normalizer Fixes (#3)

    * StructInfo AST Fix

    * Fix Extern Func Deduction and shape mutator.

    * Update VoidStructInfo & globalvar (#4)

    * Fix passes and proper sinfo propagation.

    * Refactor EraseToWellDefined to Enable Remapping

    * [WIP] First stab at symbolic param tracking

    * Update EraseToWellDefined to support symbolic shape return (#5)

    * fix R.shape with ndim (#6)

    * Remove update shape/type

    * Address review comment, AnnotateTypeShape=>AnnotateStructInfo

    * Update include/tvm/script/ir_builder/relax/frame.h

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    * Address comments

    * Update printer to use structinfo (#7)

    * Update Error mechanism to prep for obj loc based reporting

    * Symbolic shape aware function call return value derivation.

    The main flow works as follows:
    - Match and populate shape_var_map and var_map by visiting each pair
      of param and call arguments.
    - Call EraseToWellDefined to map the ret parameter to the new result.
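The match-and-populate step can be pictured as unifying a symbolic shape pattern against a concrete argument shape. A simplified, illustrative Python sketch (hypothetical names; the real logic lives in the C++ passes):

```python
def match_shape(pattern, concrete, var_map=None):
    """Unify a shape pattern (ints or str symbolic vars) with concrete dims.

    Returns the populated var map, or raises if the shapes conflict.
    """
    var_map = {} if var_map is None else var_map
    if len(pattern) != len(concrete):
        raise ValueError("rank mismatch")
    for dim, value in zip(pattern, concrete):
        if isinstance(dim, str):                      # symbolic var, e.g. "n"
            if var_map.setdefault(dim, value) != value:
                raise ValueError(f"conflicting binding for {dim}")
        elif dim != value:
            raise ValueError(f"static dim mismatch: {dim} vs {value}")
    return var_map

# (n, m) x (m, k) matmul-style params matched against concrete arguments:
env = match_shape(("n", "m"), (32, 64))
env = match_shape(("m", "k"), (64, 128), env)   # "m" must stay consistent
```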

    * [ANALYSIS] Refactor well-form to only look at struct info.

    * Update comments according to reviews.

    * Update include/tvm/relax/struct_info.h

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Tianqi Chen <tqchen>
    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 151701740fac3a53b35799a82c85d86f91b720ee
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Fri Dec 16 17:48:26 2022 -0500

    Update relay_translator.py

commit ad0f3179a84b3bc167f91c3eb082cb996b1d04e2
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 16 17:37:00 2022 -0500

    [Translator] Remove global symbol and follow-up fix for #262 (#316)

    This PR removes the `global_symbol` linkage added by Relay Translator. It also fixes unaddressed comments of #262.

    All tests can pass locally and I believe it is safe to merge this PR directly.

commit 850deded1201001d833ac65991fb1a4c6509cb1b
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 16 16:19:48 2022 -0500

    [Translator] Support translating op calls with Tuple input (#262)

    Previously, when a Relay function contains a Call which directly uses Tuples as arguments (the example below),
    ```
    %25 = (%23, %24) /* ty=(Tensor[(1, 160), float32], Tensor[(1, 160), float32]) */;
    %26 = concatenate(%25, axis=-1) /* ty=Tensor[(1, 320), float32] */;
    ```
    our Relay translator is unable to generate the corresponding CallTIR, because the translator always assumes an argument of a Call is mapped to a single tensor (see the code snippet below: the translator directly passes the Relax variable `new_args[-1]` to the function `te_tensors`, which translates a Var to a single tensor).
    https://github.com/tlc-pack/relax/blob/60e9a01cdfdd013945790fc03d5abad29b8a7c0b/python/tvm/relax/testing/relay_translator.py#L124
    https://github.com/tlc-pack/relax/blob/60e9a01cdfdd013945790fc03d5abad29b8a7c0b/src/relax/ir/emit_te.h#L56-L61

    But in fact, the Relax variable may correspond to a Tuple of tensors, which wasn't taken into consideration before. Such a case can lead to an error in `TETensor` when creating tensors.

    Therefore, this PR fixes the issue by examining the Relax variable before the tensor creation for Relay Call arguments. If an argument has Tuple shape and TupleType, we break the tuple Variable down, emit a TupleGetItem for each field, and meanwhile create a tensor for each field.
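The per-field breakdown described above can be pictured with plain-Python stand-ins (hypothetical classes, not the translator's actual API):

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str
    is_tuple: bool = False
    num_fields: int = 0

@dataclass
class TupleGetItem:
    tuple_var: Var
    index: int

def flatten_arg(arg):
    """Expand a tuple-typed argument into one accessor per field."""
    if arg.is_tuple:
        return [TupleGetItem(arg, i) for i in range(arg.num_fields)]
    return [arg]

pair = Var("p", is_tuple=True, num_fields=2)
fields = flatten_arg(pair)          # two TupleGetItem nodes, one per field
single = flatten_arg(Var("x"))      # single-tensor argument passes through
```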

commit 54a0ff551adb90937073675b4fb3d5439b814398
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Fri Dec 16 21:02:13 2022 +0800

    Remove relax parser_v1 (#313)

commit b363dd48aced8fb939880db8cf595ed65b7ecc77
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Dec 14 22:51:38 2022 -0500

    [Debugging][Arch] Expose `shape_` fields for `TupleGetItem` and `If` nodes, fix AST printer accordingly (#311)

    * Make the shape of If and TupleGetItem nodes accessible in Python

    * Remove order-dependency from AST printer tests

    * Trailing whitespace

commit 4bb01fe4eccdd59614cc264838a389b21dd40388
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Dec 14 08:11:47 2022 -0800

    [IR] Dedicated Relax Call, Constant, Tuple, TupleGetItem, If (#306)

    * relax.Constant.

    * Add callnode;

    * Tuple, tuplegetitem, If

    * mypy.

    * lint

    * rebase & fix printer.

    * rebase & remove virtual_device_

    * address comments & leave todos.

    * address comments.

    * address comments.

    * tuple index.

    * type anno.

commit 4cda8a5881fd4cd2473258b35244fc4129b6110c
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Dec 14 09:09:03 2022 -0500

    [BlockBuilder][Refactor] Normalize nested `SeqExpr`s (#310)

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 5aab150f322526c1a7bfe6cea0f4d7a7543a7f46
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Tue Dec 13 17:06:06 2022 -0500

    [ExprMutator] No prologue in VisitWithNewScope when input is SeqExpr (#305)

commit 0bf1f1b784f19298117e36016a2e522f58c143fc
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Tue Dec 13 15:27:05 2022 -0500

    [REFACTOR] Refactor BlockBuilder (#308)

commit 28d598b6a7c55f95f8f9c2ccd5c860ba5451232d
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun Dec 11 01:28:56 2022 +0800

    [Normalizer] Combine Nearby Blocks in SeqExprs (#298)

commit e152c50e368454afab75425fcb0863b1c328bf4c
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Thu Dec 8 19:33:18 2022 -0500

    [ARCH] Add VisitBinding second-level dispatcher in Expr type. (#301)

commit fed6b8fc88b824ec68260417793447dbe524c4c3
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Dec 7 16:55:40 2022 -0800

    [Linkage] Cleanup global_symbol attachment and linkage. (#300)

    * Cleanup global_symbol attachment and linkage.

    * lint

    * Add global_symbol to the main function in translation.

commit e0907d4fd03af1731310647d3d0547bdff2cfaf6
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Tue Dec 6 21:35:20 2022 -0500

    [ARCH] Introduce NestedMsg to robustly handle nested-tuple analysis (#295)

commit 2eb99975dc1b40b83db7dcbb96b748503dcb3319
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Dec 5 21:57:21 2022 +0800

    [TVMScript] Update script printer to enable roundtrip tests (#291)

commit f8ab9890e14c2533c401969ebf11dd591beff592
Author: Hongyi Jin <3231950289@qq.com>
Date:   Sun Nov 27 09:59:26 2022 -0500

    [RUNTIME] Correctly handling export_module when exporting modules of different type (#13489)

commit 9009840e654a9900009f7776a19e26f29b1e3f85
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Fri Dec 2 18:33:50 2022 -0500

    [Debugging] Support PackedFuncType in the AST Printer (#289)

commit bda0e42f05eaba657c40a850486e55c39924f3bf
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Fri Dec 2 18:31:39 2022 -0500

    [IR][Bugfix] Improvements to the normalizer and well-formed checker (#288)

commit d5fe87b21546995c7a88905bd04b4e944d28a0f4
Author: Yong Wu <yongcale@gmail.com>
Date:   Thu Dec 1 20:00:38 2022 -0800

    Enforce i64 index in ShapeExpr (#281)

commit 9c9eb5585501a5da0f25ca38d7d3ac8269b6714c
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Dec 1 11:00:47 2022 -0800

    [Parser] Register memory operators to new parser. (#279)

commit 28c3f68cc51d2c22936c5496debcb8c2de54040b
Author: Yong Wu <yongcale@gmail.com>
Date:   Thu Dec 1 08:55:31 2022 -0800

    [TVMScript] enable the closure test (#280)

    * [TVMScript] enable the closure tests.

commit eb9d531b2565cdd000f46e5ecae2c45b9f589abe
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Dec 1 05:47:05 2022 -0800

    [Normalizer] Enforce all Expr have checked_type_ invariance after normalization. (#287)

commit 43f81ddf4afc2f4fdb214c9f994e844f53126cdb
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Mon Nov 21 19:25:43 2022 -0500

    [Debugging][Bugfix] Debug printer improvements: Print `shape_` and `checked_type_` for all nodes and handle non-binding `MatchShape`s (#261)

    The initial AST printer only included the `shape_` and `checked_type_` fields for variables because of the potential for infinite recursion (`shape_` nodes can contain other expressions, which in turn have `shape_` nodes). This PR cuts off the potential recursion to allow for printing these fields for all Relax expressions, which should be more useful for debugging.

    This PR also fixes a bug: The AST printer previously did not handle `MatchShape` bindings that did not bind a new variable.

commit 304048c33956dddb5027fec26541d57f903d8ca2
Author: YuchenJin <yuchenj@cs.washington.edu>
Date:   Thu Nov 17 17:02:11 2022 -0800

    Fix after rebase, and reorganize the TVMScript folder structure.

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>

commit e7277460f0a2c7c980be9323cdf7919dc38153e2
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Nov 17 00:31:32 2022 +0800

    [TVMScript] Switch to the new parser (#276)

    * [TVMScript] Support cross-function call for relax function

    This PR adds support for cross-function calls for Relax functions, by declaring a function signature (i.e., an empty function that contains params and return type/shape but no body).

    However, this PR runs into an issue with block_builder shape deduction, which does not use the function's `ret_shape` to infer the shape of GlobalVar calls.

commit 7152175762613130e3ba647c77cc9818312a5b06
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Nov 5 16:45:33 2022 -0500

    [CI] Enable Mypy type checking for Relax; Fix typing errors to pass Mypy checking. (#270)

commit 6f8f6da505b835345d7709d06bdfd8dddce7e85b
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Nov 3 08:16:35 2022 -0700

    Introduce memory primitives (#255)

    Introduce the memory primitives, including `relax.memory.{alloc_storage, alloc_tensor, kill_storage, kill_tensor}`.
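The storage/tensor split lets one storage allocation back multiple tensor views over its lifetime. A toy pure-Python model of the four primitives (illustrative semantics only, not the runtime's implementation):

```python
class Storage:
    def __init__(self, size):
        self.buffer = bytearray(size)   # the backing allocation
        self.alive = True

def alloc_storage(size):
    return Storage(size)

def alloc_tensor(storage, offset, nbytes):
    # A tensor is a view into live storage, not a fresh allocation.
    assert storage.alive and offset + nbytes <= len(storage.buffer)
    return memoryview(storage.buffer)[offset:offset + nbytes]

def kill_tensor(tensor):
    tensor.release()                    # drop the view

def kill_storage(storage):
    storage.alive = False               # conceptually frees the buffer

st = alloc_storage(64)
t0 = alloc_tensor(st, 0, 32)
t1 = alloc_tensor(st, 32, 32)           # second tensor reuses the same storage
```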

commit 48b7c158cc01532f9019a2e615f2d94766a9464c
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Oct 20 08:30:47 2022 +0800

    [TVMScript] Update Type Annotation Behavior of the Parser (#269)

    This commit changes the behavior of the parser regarding type annotations, as suggested by the community.
    The current behavior:
    - Use the more refined type/shape between the user annotation and the deduced type/shape.
    The updated behavior:
    - Always use the user annotation.
    - Only check that the type/shape is valid.

commit 5c3079bb6e1e4eeb4dc2d9b740facb2686c67519
Author: sung <sunggg@umich.edu>
Date:   Mon Oct 17 19:07:01 2022 -0700

    Reenable autotvm silencer; fix e2e_auto_tir.py; fix lint.

    Co-authored-by: YuchenJin <yuchenj@cs.washington.edu>

commit 85b81292626ab6f23caf2b61095a6f957b61b21c
Author: sung <sunggg@umich.edu>
Date:   Mon Oct 17 18:09:34 2022 -0700

    Recover: [Bugfix] Couple of bug fixes to run TVM-gen code together with BYOC (#249)

commit c46ae8566582f1fcd8fcda1479943d3abb95b3b0
Author: sung <sunggg@umich.edu>
Date:   Mon Oct 17 17:16:01 2022 -0700

    Recover: [Pass] Separate ApplyHistoryBest from tuning passes (#226)

commit 83bc7cb144643d5823bf06220186528923835667
Author: Junru Shao <junrushao1994@gmail.com>
Date:   Sun Oct 16 22:52:56 2022 -0700

    Enable Hexagon tests

commit f9f4f7904ec5468a725b2ba924a619a7c5ed4e43
Author: Junru Shao <junrushao1994@gmail.com>
Date:   Sat Oct 15 15:25:56 2022 -0700

    Recover dropped commits

    [TVMScript] B4: If branch support (#263)
    B8: Local Function Support  (#258)
    [TVMScript] B3: Type annotation checks (#256)
    [TVMScript][Parser] B1: Dataflow block (#252)
    [TVMScript] B2: match shape support (#251)
    [TVMScript] B6/B7: Symbolic shape and var shadowing  (#245)
    [TVMScript] B5: Support relax op (#244)
    [TVMScript] B0: Call_tir support (#243)
    enhance parser error reporting (#242)
    [TVMScript] A1: Relax Parser infra (#240)
    update ci image versions. (#241)
    [TVMScript] B2-4: TIR IRBuilder (#239)
    [TVMScript] A0: Relax IRBuilder infra (#235)
    [TVMScript] B5-6: TIR IRBuilder (#231)
    [TVMScript] B1: IRBuilder (#228)
    [TVMScript] New Parser: Part C (#218)
    [TVMScript] New Parser: Part A (#221)
    [TVMScript] New Parser: Part B (#217)

    Not recovered:
    [Pass] Separate ApplyHistoryBest from tuning passes (#226)
    [Bugfix] Couple of bug fixes to run TVM-gen code together with BYOC (#249)

    co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
    co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 65a53034bc0bee9877a1bdf363c2eadcde35f226
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Thu Oct 13 23:06:55 2022 -0400

    [Op][Debugging] Add `assert` operator (#260)

    It was brought up that Relay lacks an assert operator, so we may as well have one in Relax for debugging. One issue is that we can't name it "`assert`" because Python will treat it as a syntax error to have it as a field name for the "`relax`" module, i.e., `relax.assert` is a syntax error. Thus the op is named "`assert_op`," which is not ideal but serves its purpose.

commit 71d96e6c0a314936fa49fd7bc1ea79069027ab12
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Oct 12 05:07:33 2022 -0700

    [Pass] Support Function and If in Normalize pass. (#268)

    * Support Function and If in Normalize pass.

    * Use structural equality for expr_memo_.

    * Change back to pointer equality for expr_memo_; Add more tests.

    * rebase.

commit 312a344cdeec66b1330a80d34ca78556fb338e7c
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Tue Oct 11 18:25:29 2022 -0400

    [Analysis] Expose analyses related to vars in Python (#265)

    Previously, analyses to gather up all variables, free variables, bound variables, all global variables, and all global variables that are called had been implemented in C++ but had not been exposed in Python or tested. This PR exposes these analyses and adds tests for them.

    Two further changes:
    * The analyses previously ignored variables bound in `MatchShape` nodes; these are now treated as bindings too.
    * `rec_global_vars` is renamed `called_global_vars`, since the analysis itself does not check recursion.
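
    As an illustration of what the bound/free variable analyses compute, here is a toy sketch over an invented mini-AST (this is not the TVM implementation or API; `Var`, `Call`, and `Let` here are stand-ins for the real Relax nodes):

```python
# Toy free/bound variable analysis over a miniature expression AST.
# The AST and function names are hypothetical; they only illustrate the
# idea of the bound_vars/free_vars analyses described in the commit.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Call:
    fn: "Expr"
    args: tuple

@dataclass(frozen=True)
class Let:  # binds var in body, like a VarBinding
    var: Var
    value: "Expr"
    body: "Expr"

Expr = Union[Var, Call, Let]

def bound_vars(expr: Expr) -> set:
    """Vars introduced by a binding construct anywhere in the expression."""
    if isinstance(expr, Var):
        return set()
    if isinstance(expr, Call):
        out = bound_vars(expr.fn)
        for a in expr.args:
            out |= bound_vars(a)
        return out
    return {expr.var} | bound_vars(expr.value) | bound_vars(expr.body)

def free_vars(expr: Expr) -> set:
    """Vars used without an enclosing binding."""
    if isinstance(expr, Var):
        return {expr}
    if isinstance(expr, Call):
        out = free_vars(expr.fn)
        for a in expr.args:
            out |= free_vars(a)
        return out
    return free_vars(expr.value) | (free_vars(expr.body) - {expr.var})

x, y, f = Var("x"), Var("y"), Var("f")
prog = Let(x, Call(f, (y,)), Call(f, (x,)))
print(free_vars(prog))   # f and y are free; x is bound by the Let
print(bound_vars(prog))  # only x is bound
```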

commit 132702be7e7ed0256045d7a405e532c3d5beef6d
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Mon Oct 10 18:19:38 2022 -0400

    [Expr] Allow annotating return shape on function nodes (#253)

    This PR adds a `ret_shape` field for specifying the shape of the function's return value. At present, we will not use this information, but by adding it into the AST, we will be able to parse the return shape and use it in the future. Parser V1 in this PR will just always list the `ret_shape` as `RuntimeDepShape`.

commit 7276c9e2ee13a4754775491ca36a7aae2d55b827
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Sat Sep 24 00:11:45 2022 -0400

    [Bugfix][VM] Properly convert tensor inputs in `save_function` (#257)

    It was observed that closures saved using `save_function` would crash when used over RPC with the `time_evaluator`, whereas using `set_input` and `invoke_stateful` worked as normal. While I am not entirely sure why these failures happened over RPC only in `time_evaluator` (but not in other RPC trials), it became clear that `set_input` performs a conversion of input tensor values in `SetInputTensorWithIndex`, while `save_function` was not doing this. Adding this conversion fixed the observed bug.

commit 7183c7ffbe896dd9b5f5742b62afe9c821dae682
Author: Josh Fromm <jwfromm@octoml.ai>
Date:   Wed Sep 21 17:07:08 2022 -0700

    [Call TIR] Fix bug when invoking call_tir with scalar values. (#254)

    This small PR changes a check in the tvmscript parser to support empty shape tuples which are used to represent scalars. I added a scalar addition test to make sure it works properly.

commit 605ba8d1548efb90980f9b18ea94f1d53f9ec3ec
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Sep 14 17:27:03 2022 -0400

    [Bugfix][Op] Register attributes for unique and print (#248)

    Attempting to use `dump_ast` on functions containing the operators `relax.unique` and `relax.print` previously crashed due to being unable to query their attributes' keys. It turned out that this was a problem with the operator attributes: They had not been registered on the Python side, so Python representation treated them as opaque TVM objects. This PR corrects this mistake.

commit f4525dd8a3e61f572b50107555cef4b469c971f4
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Sep 14 17:24:40 2022 -0400

    [VM][Benchmarking] Add option for saving e2e results as CSV file (#247)

    This PR makes some small additions to the end-to-end AutoTIR script, namely fixing a bug (it was incorrectly using the stateful API) and adding an option to save the test results as a CSV file for benchmarking purposes (the data can then be separately analyzed as needed).

    These changes also required a small extension to the save_function method in the VM, namely allowing it to take keyword arguments.
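
    As a sketch of the CSV-saving idea, the standard-library `csv` module is enough. The result fields and layout below are hypothetical, not the script's actual columns:

```python
# Minimal sketch of saving benchmark results as CSV. The field names
# and values here are invented for illustration.
import csv
import io

results = [
    {"workload": "resnet50", "mean_ms": 12.3, "std_ms": 0.4},
    {"workload": "bert_base", "mean_ms": 45.6, "std_ms": 1.2},
]

# Write to an in-memory buffer; a real script would open a file instead.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["workload", "mean_ms", "std_ms"])
writer.writeheader()
writer.writerows(results)
csv_text = buf.getvalue()
print(csv_text)
```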

commit f1ee4b6cd2c3ee0596cef6f5b7ff7e715fb4ae0d
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Wed Sep 14 17:23:29 2022 -0400

    [BugFix] Enable emit global MatchShape (#246)

    Fix an incorrect check that disabled emitting global MatchShape outside a dataflow block and mistakenly enabled emitting dataflow MatchShape outside a dataflow block.

commit 0a7a0a9daf5f1a2fa06ee6cd6169a28d397821fa
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Thu Sep 8 09:49:05 2022 -0400

    [Pass] Canonicalizing Bindings (#233)

    It may be useful for some passes to collapse chains of definitions, particularly after other compiler transformations that may reduce or simplify some expressions.

    This pass will take chains of definitions and replace references to later definitions with references to the original one. It works by checking `LookupBinding` for each var use-site and replacing the var with its definition if the definition was another var. (Note: This required updating `BlockBuilder` to also update its binding map for `MatchShape` nodes; that was arguably a bug.) Additionally, `MatchShape` bindings where the `LHS` and the `RHS` are guaranteed to match at compile time are canonicalized into ordinary `VarBinding`s.
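
    The chain-collapsing idea can be sketched in a few lines of plain Python. This toy "pass" over (name, value) bindings is invented for illustration and is not the actual Relax pass or its data structures:

```python
# Toy sketch of binding canonicalization: when a var is defined as another
# var (y = x; z = y), drop the trivial binding and rewrite later uses back
# to the original var. Hypothetical representation, not the Relax IR.
def canonicalize(bindings):
    """bindings: list of (name, value); value is a var name or a tuple expr."""
    alias = {}  # var -> canonical var

    def resolve(name):
        while name in alias:
            name = alias[name]
        return name

    names = {n for n, _ in bindings}
    out = []
    for name, value in bindings:
        if isinstance(value, str) and value in names:
            # y = x: record the alias and drop the trivial binding
            alias[name] = resolve(value)
        else:
            out.append((name, value))

    # Rewrite remaining uses of aliased vars to the canonical var.
    rewritten = []
    for name, value in out:
        if isinstance(value, tuple):  # e.g. ("add", "z", "z")
            value = tuple(resolve(v) if isinstance(v, str) else v for v in value)
        rewritten.append((name, value))
    return rewritten

prog = [("x", ("const", 1)), ("y", "x"), ("z", "y"), ("w", ("add", "z", "z"))]
print(canonicalize(prog))  # uses of z collapse back to x
```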

commit 7a6f91f7d4077eebf926aa1f19281404494b9362
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu Sep 1 07:02:57 2022 -0400

    [Hexagon] Use uploaded path to load module. (#238)

    * Fixes a bug by using the uploaded file's remote path when loading the
    module remotely.

    * Modifies the task_python_hexagon.sh script to only run passing tests
    on device. This is used by Jenkins CI.

commit e50290140c204ae091e335b797a07f2f6567a163
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Aug 18 21:51:35 2022 -0700

    [Pass] New Python ExprVisitor/ExprMutator! (#190)

    Add decorators `visitor` and `mutator` to help users create `ExprVisitor` and `ExprMutator` in Python. Users can customize visit/rewrite/post-order-rewrite functions in Python. `PyExprVisitor` and `PyExprMutator` list the functions users can customize.
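
    The customizable-visitor idea dispatches on node type and lets a subclass override only the methods it cares about. A minimal sketch over an invented mini-AST, not the real `PyExprVisitor` API:

```python
# Toy sketch of a Python expression visitor with per-node-type methods.
# The Node classes and method names are hypothetical stand-ins.
class Node: ...

class Add(Node):
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

class Const(Node):
    def __init__(self, value):
        self.value = value

class ExprVisitor:
    def visit(self, node):
        # Dispatch to visit_add / visit_const based on the node's type name.
        method = getattr(self, f"visit_{type(node).__name__.lower()}", None)
        if method is None:
            raise NotImplementedError(type(node).__name__)
        return method(node)

    def visit_add(self, node):
        self.visit(node.lhs)
        self.visit(node.rhs)

    def visit_const(self, node):
        pass

class ConstCounter(ExprVisitor):
    """Overrides only the visit it cares about; traversal is inherited."""
    def __init__(self):
        self.count = 0

    def visit_const(self, node):
        self.count += 1

counter = ConstCounter()
counter.visit(Add(Const(1), Add(Const(2), Const(3))))
print(counter.count)  # 3
```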

commit 7313855476cc522bf3e8bdbe7a60b82cd725fe4c
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Thu Aug 18 15:20:06 2022 -0400

    [BugFix] Expose `relax.expr.Constant` to `relax.Constant` (#230)

commit cdfd4e939f2d1e88c560a05d83ddf2f7afe70304
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Aug 18 02:25:13 2022 +0800

    [FIX] Fix windows build issue when allocating a dynamic array (#219)

    In the current codebase, kNumArgs is a runtime-dependent variable (i.e. its value depends on the input shape of Array).

    Allocating arrays whose size is a runtime value is not allowed when building on Windows (I'm surprised it compiles on Linux and macOS).

commit 887762cd97686ae23a61609ca9ffc8d6a2c5178b
Author: Yong Wu <yongcale@gmail.com>
Date:   Mon Aug 15 08:00:31 2022 +0800

    Update with rebase

commit 5a23346bc437043b48866411e39dfcf066edda59
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sun Aug 14 14:44:12 2022 -0700

    [Bugfix][VM] Fix var binding to a ConstantNode; Force VM if.cond register to take an NDArray instead of POD. (#216)

    Fix the bug in #212. The cause of this bug is that VM codegen did not handle binding a ConstantNode to a variable (`x = relax.const([1, 2])`) and saving the constant NDArray to the register. Previously the codegen only handled the case where a ConstantNode appears as a CallNode's argument. Now it's fixed and a unit test is added.

    Fix the bug in https://github.com/tlc-pack/relax/issues/214#issuecomment-1211411432. The issue was caused by the VM simply reading the condition register of the If instruction and expecting it to be a POD int or bool. https://github.com/tlc-pack/relax/commit/811e877c289fa52f55886c8a3e8dce10ed84915f adds a `LoadScalarInt` function, similar to the Relay VM's, to check that the If.cond register stores an NDArray and cast it to int64. Since we haven't introduced PrimValue and PrimType (which represent POD values like int and bool) to the Relax language yet, let's enforce `If->cond` to be a Tensor (NDArray at runtime).

commit 6c9d403503297a0d0e28318bafcba9fc9c99ae42
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Fri Aug 12 13:53:28 2022 -0400

    [VM][UX] Allow for saving closures to avoid extra dictionary lookups in timing trials (#208)

    This PR implements a function that allows for saving a `PackedFunc` in the VM's module that just calls an existing function with a specific set of arguments to address #179 and #178. The main use of this is for timing, to avoid some overhead in looking up functions.
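
    The idea of saving a closure under a name, binding the function and arguments once so later calls skip the lookup, can be sketched with plain Python. `MiniVM` and its method names below are hypothetical stand-ins, not the actual VM API:

```python
# Sketch of "save a closure to avoid repeated lookups": bind a function to
# fixed arguments once and store the result under a new name. The registry
# here is invented for illustration.
import functools

class MiniVM:
    def __init__(self):
        self._funcs = {"main": lambda a, b: a + b}
        self._saved = {}

    def save_function(self, func_name, saved_name, *args, **kwargs):
        func = self._funcs[func_name]  # look up once, not on every call
        self._saved[saved_name] = functools.partial(func, *args, **kwargs)

    def __getitem__(self, saved_name):
        return self._saved[saved_name]

vm = MiniVM()
vm.save_function("main", "main_saved", 2, 3)
print(vm["main_saved"]())  # 5: calls main(2, 3) with no dictionary lookup
```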

commit e172b40af31dc3384adbcf6e7b0bce7f31ce41ea
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Thu Aug 11 19:55:57 2022 -0500

    [Pass][UX] Statement rewriter for DataflowBlock (#210)

    - Implements a few APIs to quickly perform statement-level mutation: `add`/`remove_unused`/`remove_all_unused`/`replace_all_uses`.
    - Implemented `remove_all_unused` to remove dead statements inside `DataflowBlock` cc: @psrivas2
    - Address minor issues (unnecessary headers and bad docstrings) in https://github.com/tlc-pack/relax/pull/163

commit 37791e0a5d4a495365fd647f2cecbed16f3a3785
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Thu Aug 11 13:50:56 2022 -0500

    Clean warning messages by Clang and Pylint (#215)

    * refact: clean clang warning in relax

    * refact: fix pylint

    * fix cpplint and clangd suggestions

    * fix: no cpplint on virtual-override

commit 0b00715dc634aa7f091e942a54a29ee9c802ccf9
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Aug 10 11:47:37 2022 -0400

    [VM][UX] Implement stateful API (#207)

    This PR implements the stateful API discussed in https://github.com/tlc-pack/relax/issues/179. It ensures that if you use `set_input` to set inputs, you must use `invoke_stateful` to run the function (otherwise failing) and must obtain the results using `get_output`. It handles nested tuple returns.
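
    The contract described above (set inputs, then invoke statefully, then read outputs) can be modeled as a small state machine. This toy class only illustrates the enforced ordering, not the actual VM implementation:

```python
# Sketch of the stateful-API contract: set_input must precede
# invoke_stateful, which must precede get_output. Hypothetical class,
# not the TVM VM.
class StatefulRunner:
    def __init__(self, func):
        self._func = func
        self._inputs = None
        self._output = None

    def set_input(self, *args):
        self._inputs = args

    def invoke_stateful(self):
        if self._inputs is None:
            raise RuntimeError("must call set_input before invoke_stateful")
        self._output = self._func(*self._inputs)

    def get_output(self):
        if self._output is None:
            raise RuntimeError("must call invoke_stateful before get_output")
        return self._output

runner = StatefulRunner(lambda a, b: a * b)
runner.set_input(6, 7)
runner.invoke_stateful()
print(runner.get_output())  # 42
```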

commit ed7b77e040654582d1ab1b9535ebbc4da77da243
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Tue Aug 9 17:07:52 2022 -0400

    [Op][Debugging] Add a print operator (#201)

    * Attempt at adding a print operator

    * Fix the registration

    * Actually use the format string

    * Improve test

    * Fix comment placement

    * Improve the docstring for relax_print

    * Handle tuples too

    * Formatting :(

    * Correct commit message

    * Match attr name across Python and C++

    * Make print variadic

commit a9bd3053c1106d1926fce1dc5787fc8be27f3985
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Fri Aug 5 11:45:03 2022 -0400

    [Pass] Implement legacy lowering pass that leverages relay op strategy (#189)

    This PR implements Relax Op lowering that leverages existing Relay Op Strategy (legacy).
    As ops like conv2d and matmul are relay- and relax-independent, this pass assumes that we can always find Relay op equivalents for such Relax ops and uses their info to leverage the Relay op strategy.

commit 1a1bcf75d97b2e7e4f758b6cd08bd747b222ef36
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Thu Aug 4 17:56:17 2022 -0400

    [Pass] Introduce metaschedule as a tuning pass (#188)

    This PR delivers MetaSchedule tuning as a tuning pass.
    We can either tune at the IRModule level with relax.transform.MetaScheduleTuneIRMod or at the PrimFunc level with relax.transform.MetaScheduleTuneTIR.

commit 7144654633477ea0d2bff300ba753dc8bfdeae4d
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Thu Aug 4 14:34:10 2022 -0400

    [Example][UX] Make the RPC timeout configurable in the `e2e_auto_tir` example (#186)

    Running the e2e_auto_tir example over RPC can run into issues due to timeouts because some models can take a long time to run on some machines. This PR makes the RPC timeout configurable to more easily address these issues.

commit 81e565e5df90cfe12d22deb7b26845ea3aa13526
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Aug 3 19:38:21 2022 -0400

    Fix BlockBuilder Scope Recovery in Misuse (#199)

    This happens in interactive use cases. When a function scope
    exit triggers an error, we need to recover
    BlockBuilder.current properly so users can try again.

commit 21b1e7dc35dc838214cd4b6f26fbc31492323b02
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Aug 3 19:09:21 2022 -0400

    [Testing][AST] Add a simple AST printer for debugging (#198)

    * Add ast printer

    * Print seq expr body

    * Match annotation field names to real AST

    * Handle call attrs and func ret types

    * Add more advanced test cases

commit 89f55c8167a80b4b9c8751309b5db648fb4db047
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Wed Aug 3 09:59:47 2022 -0500

    [UX] Adopt changes from tvm-main and render code with IPython.display (#192)

    Render code with IPython.display.HTML if possible to fix the ansi-escape 24-bit rendering issue in Colab.

commit 0b52b558eb14b3f113a4b543c8f0a824baaa58bc
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Mon Aug 1 11:59:24 2022 -0500

    Dataflow Pattern Lang: Core Matching Features (#163)

    The structure is similar to Relay's pattern matcher (https://github.com/apache/tvm/pull/5231). The main difference is that the pattern types are adapted to be relax-compatible. Relay pattern types, some less used patterns (IfPattern), and df-topological patterns (DominatorPattern) are omitted for now (some of them will be brought in later).

    The implementation splits patterns into two parts:
    - **Match an Expression**: match an expression syntactically (`MatchExprPattern`, i.e., `DFPatternMatcher`);
    - **Match a Graph**: match a graph (cross multiple `VarBinding`) topologically (`MatchGraphPattern`);
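
    A toy sketch of the "match an expression" half: structural matching of wildcard and call patterns against an invented mini-AST (these types are stand-ins, not the actual relax pattern-language classes):

```python
# Toy syntactic pattern matcher in the spirit of MatchExprPattern.
# Call/Wildcard/CallPattern here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Call:
    op: str
    args: tuple

@dataclass
class Wildcard:
    pass

@dataclass
class CallPattern:
    op: str
    args: tuple  # of patterns

def match(pattern, expr) -> bool:
    if isinstance(pattern, Wildcard):
        return True  # wildcard matches anything
    if isinstance(pattern, CallPattern):
        return (
            isinstance(expr, Call)
            and expr.op == pattern.op
            and len(expr.args) == len(pattern.args)
            and all(match(p, e) for p, e in zip(pattern.args, expr.args))
        )
    return pattern == expr  # leaf patterns match by equality

expr = Call("add", (Call("mul", ("x", "y")), "z"))
pat = CallPattern("add", (CallPattern("mul", (Wildcard(), Wildcard())), Wildcard()))
print(match(pat, expr))  # True: add(mul(_, _), _) matches structurally
```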

commit 74371634e9a011e63650b734aba20546b016c524
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Tue Jul 26 20:06:25 2022 -0500

    [UX] Highlight TVMScript with Pygments (#185)

commit 15e54ef215950944ffd74858c12c30aabcb0dcce
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sat Jul 23 11:22:13 2022 +0800

    [Pass] Enhance BindParams to take numpy dict as input (#184)

commit cf2e3b97110c805597059c5ba8303a653417e080
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Mon Jul 18 21:45:21 2022 -0400

    [Bugfix][VM] Ensure set_input works over RPC by not returning an array of argument names (#183)

    Currently, attempting to use the VM's `set_input` method will fail over RPC because `set_input` calls `get_func_param_names`, which returns an array of parameter names. RPC does not support sending arrays. This PR corrects this issue by instead having `set_input` query the function arity and then query the argument names one by one, which is the approach taken by the Relay VM (accordingly, the names for the functions used to do this, `get_function_arity` and `get_function_param_name`, are taken from the Relay VM).

    This PR also adds a unit test over RPC on localhost.
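
    The workaround pattern, replacing one array-returning call with an arity query plus per-index scalar queries, can be sketched as follows (the stub class and its data are invented for illustration; they are not the RPC or VM API):

```python
# Sketch of "query arity, then names one by one" instead of returning an
# array over an RPC-like boundary that only supports scalars/strings.
class RemoteModule:
    """Pretend remote endpoint that can only return scalars and strings."""
    _params = {"main": ["data", "weight", "bias"]}

    def get_function_arity(self, func_name) -> int:
        return len(self._params[func_name])

    def get_function_param_name(self, func_name, index) -> str:
        return self._params[func_name][index]

def collect_param_names(remote, func_name):
    # One scalar call for the arity, then one string call per parameter.
    arity = remote.get_function_arity(func_name)
    return [remote.get_function_param_name(func_name, i) for i in range(arity)]

names = collect_param_names(RemoteModule(), "main")
print(names)  # ['data', 'weight', 'bias']
```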

commit b0e57dbc0862499c3f2a7d91858354c41fcf5e95
Author: Yong Wu <yongcale@gmail.com>
Date:   Fri Jul 15 11:50:29 2022 -0700

    Fix after rebase

commit 3494b7a47bf0f7c3219538b2e9064b825cf3258c
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Mon Jul 18 00:38:41 2022 -0400

    [Pass Infra] Tuning API serialization and database support (#168)

    * refactor tuning API to support serialization of Choice, Knob, Trace

    * Implement tuning api JSON database

    * Add comments

    * fix pylint

    * fix cpplint

    * reflect feedback

    * add minor comment for the future work

commit 777549a6037cc97b698f53ed629cf65c33ae7eca
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jul 18 00:05:14 2022 +0800

    [Fix] fix windows build issue (#182)

    TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS is needed when we have a default-like constructor (e.g. `Span span = Span()`).

commit b81e6a9838f92ba412a0bd4951a46cc61a43a22d
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jul 18 00:04:03 2022 +0800

    fix print twice issue (#181)

commit d4cc79ed664bbe34a4d9dab2923cd5a7a7c5b52c
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Jul 14 09:15:44 2022 -0700

    [Pass] Python ExprMutatorBase/ExprMutator (#172)

    - Rewrite ExprFunctor in Python. New ExprMutatorBase and ExprMutator in Python.
    - Implement demo passes: RewriteFMA and FuseFMA with Python ExprMutator.
    - Expose some functions to ffi in block_builder.py

commit 01cdc4d43258b1fb9dcc630f05f38f792e3bc513
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Tue Jul 12 19:25:51 2022 -0400

    [VM] Deprecate API to save/load executable to file (#176)

    Executable `save_to_file` and `load_exec_from_file` API was used to
    save/load just the executable to/from file. This was confusing as it did
    not export the TensorIR kernels in the Relax Module, thus leading to
    bugs such as https://github.com/tlc-pack/relax/issues/175.
    Moreover, the API was only used in some tests, and is not useful for end
    users.

    Deprecating this API to have a single uniform way of
    serializing/deserializing TVM IRModule using `export_library` and
    `tvm.runtime.load_module` API.

commit 74b3d67e8ae74aed3446a5ae5a05b8f5586e2c3b
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Fri Jul 1 09:31:30 2022 -0700

    [Refactor] Generic dispatching for `IsBaseOf`; Simplify Type/Expr initializations; `relax` -> `R` in printer; Disallow local function in VMCodegen (#171)

    - Generic dispatching for `IsBaseOf`: `IsBaseOf` used a bunch of if-else branches to check the subtype relation between the base type and derived type; now it uses a generic TypeFunctor to dispatch on the base class to do the check.
    - Simplify Type/Expr initializations: We had to write `RuntimeDepShape(Span())`, `ObjectType(Span())` to initialize several Types and Exprs; this is due to the `TVM_DEFINE_OBJECT_REF_METHODS` macro that sets the constructor with `= default`. By changing to use `TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS`, we can now just write `RuntimeDepShape()` without specifying an empty span.
    - `relax` -> `R` in printer: Change to print `R` rather than `relax` in TVMScript as the default behavior. This is consistent with our test cases and TIR convention: using `T` as shorthand.
    - Disallow generating code for local functions in VMCodegen: these local functions should have been lifted by the lambda lifting pass before codegen.

commit 8fdc3ba3eae0d1ffc535e240be251aaae5546eb8
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu Jun 30 15:14:40 2022 -0700

    [Parser] Enable R.parser.pretty_print to print TIR PrimFunc (#174)

    This way we can have a uniform API to print IRModule, TensorIR
    function and Relax functions.

commit ed0414540c9fbc063aa727cfc71bdee51a4bafdd
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Wed Jun 29 08:20:17 2022 -0700

    Update tests to use `set_input` for rpc calls. (#173)

    Fix relax-hexagon tests to use set_input api, which is the correct way to invoke a function over RPC.

commit 1f962bda7a79d13fee1a4f9f4ad3ddde4f5467b2
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Tue Jun 28 20:49:33 2022 -0400

    [BYOC][PASS] Prototype implementation of modular compilation w/ TensorRT (#164)

    This PR delivers the prototype of the followings:
    - Relax BYOC JSON codegen
    - Relax BYOC TensorRT codegen
    - Extension in Relax VM to support external modules
    - `RunCodegen` pass: run codegen for the annotated relax functions
       - Annotation (dispatch decision) will be done by earlier passes  e.g., greedy heuristic, Collage
       - The generated runtime module and Codegen itself should be tvm object
    - Misc minor code improvement for other passes

commit f25fe0c80670272582db3aa791901c7fa49fc59e
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Tue Jun 28 12:47:07 2022 -0700

    Run static/dynamic models over Hexagon using Relax VM RPC (#167)

    * Move Relax VM builtins to src/runtime.

    * This fixes a bug we encountered while loading the module for Hexagon.
    Since it was building the minimal runtime, it was missing definitions
    of the Relax VM builtins.

    * Mark Hexagon module as DSO exportable.

    * Load Relax VM Executable over RPC

    * Support allocation for shape heap on device

    Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>

commit 25174be634b5e04f0468b48bd477f22b17e75f84
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Fri Jun 24 13:33:04 2022 -0700

    [CI] Enable Hexagon CI in Jenkins. (#169)

    Running all Hexagon tests in the simulator is very slow, so we only run
    the Relax-related Hexagon tests in `test_relax_integration.py`.
    This test file is empty right now and will be
    populated as we push relax-hexagon related changes.

commit 225aecdb5d7d33f2af048f3aef9c9a6ac758f4fd
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Jun 23 09:47:30 2022 -0700

    [VM] Add set_input interface; Fix e2e tuning script. (#166)

    * Add set_input interface.

    * Address comments.

commit 29a707cbd9be6e02dd8a3cd1961cfb53057eb51b
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Jun 16 09:07:45 2022 -0700

    WellFormed Instrument (#165)

    * add conftest for test/python/relax

    * [Wellformed Check]: allow TupleType as Function parameters

    * move WellFromedInstrument to relax.ir.instrument

    * add header

commit b4c3c4bb65b09db7c9b3ec114d6680d14f306d37
Author: Yong Wu <yongcale@gmail.com>
Date:   Sat Jun 11 23:26:17 2022 -0700

    Update after rebase

commit 3c0e3c0ee08c78b17cc1ba0429727c199737403a
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Jun 11 18:42:29 2022 -0700

    [Relay translator] Allow replacing default topi function with user-provided TIR PrimFunc. (#159)

    * Add replace_op_with_tir to translator.

    * came up with a better name

    * better doc.

commit f250f93eed886dc2c3a1cb1f8a4ab2077c57080e
Author: Yong Wu <yongcale@gmail.com>
Date:   Sat Jun 11 15:20:21 2022 -0700

    [Pass] Lambda Lifting (#99)

commit b55fd31d4e11373b30a93f88412a3d6e2d21d3c1
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue Jun 7 10:07:17 2022 +0800

    [E2E] End-to-End tuning e2e_script (#153)

    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>

commit d3f94e73ec7b9c9ac7b3675f962e9030e55fa603
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu Jun 2 08:19:18 2022 -0700

    Fix shape lowering pass bug for non i64 dims. (#152)

    Prior to this change, the VM Shape Lowering pass did not cast integer values
    to the shape heap dtype (i64), which resulted in incorrect values when read
    from the heap later. This PR adds a cast to i64 for such values.
    It also adds a well-formed check to ensure shape dimensions are of
    integer types.

commit 9cf777f48069d598eda276be0b9aabaf301acf0f
Author: Yong Wu <yongcale@gmail.com>
Date:   Wed Jun 1 17:52:40 2022 -0700

    [Parser] Add FuncType support (#154)

    * [Parser] Add FuncType support

    * Address comments

commit f99121d506df45870cd026e052f5b3c41d4bd982
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Wed Jun 1 09:01:40 2022 -0700

    [PASS] Remove Unused Functions in IRModule (#151)

commit a718e9f9e073ca0ea1790562254c09aaa863eaa4
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Tue May 31 15:15:28 2022 -0700

    [Pass Infra] Tuning Pass API (#144)

commit a485b7bdb45f8379daa45e8c923a47fd6871cbdf
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Sun May 29 12:51:07 2022 -0400

    [REFACTOR] Move TIR op kind analysis to relax as it is relax oriented (#155)

    This also keeps TIR mostly independent from higher-level IR.

commit abd20bdc9b87aa53e0c27e8c5c3fc195be5e8c91
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun May 29 23:31:05 2022 +0800

    add test cases for FuseTIR (#156)

commit de42ec3d5ae0f0304060460764619a5a16995a33
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu May 26 22:14:51 2022 +0800

    [Pass] Relax Transform FuseTIR (#150)

    * [Pass] Relax Transform FuseTIR

    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>

commit 153d0cc8f2d39b23e63fcd6feaf9755a0eaf8c28
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed May 25 15:44:59 2022 -0700

    [Mutator] Separate unnormalized-form and normal-form mutators (#148)

commit dfa42c09a3087605e805526ab7db7b49d6752ca5
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Fri May 20 16:30:18 2022 -0700

    Print/parse tir cast/max operations in Relax shape (#149)

    tir.cast and tir.max are commonly used operators in shape expressions in
    Relax. These two operators often show up when importing Relay modules
    with `Any` dims into Relax.

commit c7186fd44ad5865d84ac61fc2981a15c8af9be4c
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu May 19 18:29:12 2022 -0700

    Add support to import relay models with Any dim. (#146)

    Converts Relay Any dimension to symbolic dim in Relax.

commit ef9cf6baba1c2f7215746459ad5a9193df6572c9
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Tue May 17 07:55:56 2022 -0700

    Refactor shape lowering pass and Blockbuilder. (#145)

commit 230def2284c21eaff520e58fa96a80313b6a7c8f
Author: Yong Wu <yongcale@gmail.com>
Date:   Fri May 13 14:30:05 2022 -0700

    Support Closure (#140)

commit 0e998988aabdeb8d913e2889eb5a9d72bee35ca2
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu May 12 17:13:15 2022 -0700

    [Analysis] IRModule well-formed check (#142)

commit 1bd4e685ffcc0c4b677af47ecc8609dbfacdfd9d
Author: Yong Wu <yongcale@gmail.com>
Date:   Wed May 11 09:31:13 2022 -0700

    Change after rebase

commit d0ad35b375449c7e067a1edada7502557a03dd26
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue May 10 08:44:22 2022 +0800

    FuseOps for relax (#141)

    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>

commit ae7b5b79c40498203842b6c9193e91bcc1937bea
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Wed May 4 20:52:16 2022 -0700

    Add `relax.unique` operator in Relax. (#135)

    * Add Unique operator in Relax.

    This adds the functionality to register a packed function implementation of
    any operator using `FCallPacked` attribute. The relax operator would be
    lowered to a call to the registered packed function during codegen.
    For example, in this change relax.unique is lowered to the
    `relax.run.unique` packed function, which uses torch.unique under the
    hood.

    * Add support for integer constants in Relax VM.

    This adds serialization, deserialization, and print support for
    integer constants.

commit 1ca18611ae59ab4d1667066ed9921690d2a5611c
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue May 3 09:34:55 2022 +0800

    Add ShapeType to ShapeExpr.checked_type during construction (#139)

commit 6481d533ed259a080dede704f7443c4a2221a842
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Mon May 2 16:26:08 2022 -0700

    Introduce Relax function attribute and drop name field in Relax function (#136)

commit d735ebd719d89c804691b29ee0d881c785384fc6
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Apr 30 18:45:14 2022 -0700

    [BlockBuilder] Sub function call shape deduction: constant shape case. (#137)

commit 10f8e56cbcb27beb373075e3c6e3a9728ffb5eb2
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Apr 28 16:59:38 2022 -0700

    [AST][Type] Introduce ObjectType; Infer the type of call_packed by type_args; Refactor InferType/InferShape. (#132)

commit 7e2038a8b662659dd6ba2e2a86bedbc6c3891bfa
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Apr 25 17:20:19 2022 -0700

    [AST][BlockBuilder] Normalize relax.Function; Refactor BlockBuilder to take optional input IRModule. (#133)

commit f1eca6d74365c6b0665b64c86ececce86fd76df3
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Sun Apr 24 07:09:11 2022 -0700

    [Printer][Parser] Modify Tensor annotation printing and parsing. (#128)

commit 296876eaf1246ea7948c69d2111cfea2ca51ca0c
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Fri Apr 22 08:05:13 2022 -0700

    [Pass] Python pass decorator and ExprFunctor (#126)

    * Relax ExprFunctor in Python

    * fix the register bug

    * Expr_functor in relax

    * function/dataflowblock Pass in python

    * testcases

    * reformat

    * fix Tensor annotation()

    * add return type hint

    * type hint

    * new test

    * fix typo

    * remove memo

commit 5199a206cc86cee9e43b0c8ddddf704acdc4b513
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Thu Apr 21 22:20:33 2022 +0800

    [Relax][MS] Task extraction with proper weights (#129)

    * [Relax][MS] Task extraction with proper weights (hzfengsy#32)

    * Add a unit test

    * Update the deduplication mapping / Update the unit test

    * Update test for DummyDB reusing

    * Remove unnecessary args

    * Remove unused import

commit badee2add6700f12671d3223e43875ca050f537a
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Wed Apr 20 17:09:37 2022 -0700

    [Relay Translator] Use OpStrategy for lowering (#130)

    * [Relay Translator] Use OpStrategy for lowering

    * Reflect feedback and fix lint issue

    * Consider contexts for PassContext, Target, etc. for both pass application and lowering

commit 4454563d240c547fb762cec770502b1e09b195f0
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Wed Apr 13 21:00:54 2022 -0700

    Deprecate `[]` in favor `()` in Tensor annotation. (#123)

commit fab2d95697f7eecce90cb0ba12db2457caf4f2e3
Author: Yong Wu <yongcale@gmail.com>
Date:   Tue Apr 12 21:15:38 2022 -0700

    Add tune_relax to integrate with task scheduler (#127)

commit 39bab0d25f3e5bb48adf52534f2318149047f617
Author: Yong Wu <yongcale@gmail.com>
Date:   Tue Apr 12 16:22:33 2022 -0700

    Update autotir integration after rebase

commit caae30f06d237c3aebd00290802122bbfdb2ae26
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Tue Apr 12 08:23:32 2022 -0700

    [VM] Support sub function call and recursion. (#125)

    * Sub function call and recursion.

    * Address comment.

commit e7c7c15972f6aa29f30a167a794db17f74a6bdeb
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Tue Apr 12 14:18:32 2022 +0800

    [VM] Copy constant tensors to device (#124)

    * [VM] Copy constants to device (Hzfengsy#24)

    * [VM] Copy constants to device

    * Add unit tests

    * Specify shape and dtype for constant TE tensors in EmitTE

commit ef0a3e689b3896fd30a392d094beaa8d68b6de07
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Wed Apr 6 11:59:33 2022 -0700

    DataflowBlockPass (#114)

    * add DataflowBlockPass

    * update fma_rewrite

    * drop the skip function

    * update test_fma_rewrite with DataflowBlockPass

    * fix the format

    * fix name

    * rewrite test in tvm script

    * add non-dataflow Vars check

    * add fail testcases

    * module->IRModule

    * add docstring to DataflowBlockNode

    * remove unused pattern

    * Transform Pass->DataflowBlock Pass

    * rename global var to global scope var

    * remove print stmt

    * reformat tests

    * add docstring to DataflowBlockMutator

    * fix filename

    * minor fix

commit 2607f3b9112197045e773b0fc7ceb9ae57e844f8
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Apr 4 19:59:30 2022 -0700

    Remove type annotation from Var. (#121)

commit 969ffb4302f35344524ef36e74325c0d5e427b76
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Mon Apr 4 08:33:43 2022 -0700

    Add a new Expr to represent runtime dependent shapes. (#117)

    This can be used to represent runtime-dependent shapes such as the output of the `unique` operator. Having an explicit runtime-dependent shape expression helps distinguish two cases in the AST: (1) the shape has not been deduced (`shape_ = nullptr`), and (2) the shape is runtime dependent. Previously both cases were mapped to `shape_ = nullptr`.
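
    The distinction can be sketched with a sentinel type: `None` for "not deduced yet" versus an explicit runtime-dependent marker (the names below are invented for illustration, not the Relax AST classes):

```python
# Sketch of distinguishing "shape not deduced" from "shape is runtime
# dependent" using an explicit sentinel instead of overloading None.
class RuntimeDepShape:
    """Marker: the shape exists but is only known at runtime."""
    def __repr__(self):
        return "RuntimeDepShape()"

def describe(shape):
    if shape is None:
        return "shape not deduced"
    if isinstance(shape, RuntimeDepShape):
        return "shape known only at runtime"
    return f"static shape {shape}"

print(describe(None))
print(describe(RuntimeDepShape()))
print(describe((3, 4)))
```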

commit 1e2a11f6326c9b3fd3807bbe5d97e4a20ce9dadd
Author: Hongyi Jin <3231950289@qq.com>
Date:   Sun Apr 3 00:42:38 2022 +0800

    [PASS] Fold constant & Bind Params (#113)

    * fold constant and bind params

    * fix test

    * format

    * format

    * format

    * address comments

    * format

    * address comment

    * address comment

    * format

    * fix type bug

commit d441f1d0f2104b51287f9f29d9ec9f0e87f4b9d9
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Sat Apr 2 00:00:19 2022 -0400

    Temporary remove function type deduction in normalizer. (#119)

    * Temporary remove function type deduction in normalizer.

    This PR temporarily removes the function type deduction in the normalizer
    to unblock some of the followup passes that need to check function
    type equality.

    Functions' checked_type_ is left as nullptr for now.
    We should follow up to add function type deduction from annotations.

    * revert the normalizer skip for now

    * comment out parser assert for now

commit 159f599248e3c6faf969198d4e7cf03c4f3f6c70
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Fri Apr 1 09:18:33 2022 -0700

    [BlockBuilder] Deduce and fill shape/type for Expr in Normalize. (#116)

commit 96c8bbc53286a0ca90ddcb92346156f23ab9efe3
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Mar 30 11:46:50 2022 -0700

    [CI] Enable GPU tests; Add AutoTIR cuda test. (#115)

    * Add gpu ci.

    * Update autotir gpu test.

commit 1e5c2dac7b01f73c7e3e1a8b092eb0f2b6cc5e28
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Mar 28 19:12:59 2022 -0400

    [FIX] Fix structure equal hash for MatchShape (#112)

    The pattern field of the match shape can define variables,
    as a result, we need to add DefEqual and Hash here.

    Added a regression testcase.

    Lesson: we would benefit from more testcases
    with check_save_roundtrip checks (like this one) for more relax examples.

    Additional change:
    - Redirected TVMScript printer to be able to print relax fragments useful for debugging.

commit 8e466be1d1fa65b9df119e0563ef58c38e8562f2
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue Mar 29 01:30:07 2022 +0800

    introduce blockbuilder call_te (#110)

commit 6ff1614ac3c9e63ea5b615a072a1d26a197b58f9
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun Mar 27 00:02:53 2022 +0800

    [FIX] fix structural_equal_hash (#107)

    * fix structural_equal_hash

    (cherry picked from commit e7e962634999739a32129378f61cc95f58335447)

    * address comment & pass the ci

commit 31ed53c92192c74a3f55009e718b8ae0527ce078
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Fri Mar 25 10:49:00 2022 -0700

    [Bugfix] Fix call_tir parsing bug (#109)

    * Fix call_tir parsing bug.

    * update.

commit 3c7ff5a272d4b004b9b86b79e0f10c33635cea05
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Mar 24 19:50:27 2022 -0700

    [VM] Fix hardcoded device type in memory lowering (#106)

    * Add is_device field to attr.

    * Update.

    * Address comment.

    * update.

    * Update.

commit 6bcdcf8d02809dbbafbbd9515ea7ada17bb00077
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Thu Mar 24 23:04:11 2022 +0800

    [VM] Initialize VM through packed function (#101)

commit cfc779e732933eb43cb0bca6448c51fac51dc39f
Author: Yong Wu <yongcale@gmail.com>
Date:   Tue Mar 22 19:44:37 2022 -0700

    Fix after rebase

commit c368324831d378033d9b0f6621f3ee3b366624e6
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Tue Mar 22 18:51:40 2022 -0700

    Improve printer for DynTensorType and ShapeExpr (#97)

    * improve Printer for DynTensorType & ShapeExpr

    * add testcases

commit a861f2eeadc3ded5a98aa2947a6b17f077e29dc2
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Tue Mar 22 23:16:33 2022 +0800

    [VM][Refactor] Move VM files to TVM runtime directory (#98)

commit d96806093e9ff50aaf4d46a89d1003f87385bf7e
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Mar 21 12:03:59 2022 -0400

    [VM] Refactor and improve vm. (#96)

    * [VM] Refactor and improve vm.

    - Have a separate function for RunInstCall.
    - Cache func_index lookup by table to avoid repetitive lookup by str.
    - Move PackedFunc call arg stack to Frame to increase locality and avoid re-allocation in repetitive calls.
    - Make frame stack of unique_ptr to avoid frame re-allocation and copy during frame.resize.
    - Pass…
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Jan 19, 2023
This PR migrates mlc-ai/relax#46 to new struct
info infra, as part of our AD migration.

    Because we need to do numerical testing for gradients, this PR depends on
    the operator legalizer mlc-ai/relax#96. Also, because the original
    version of the legalizer did not handle the negative indexing case of
    `relax.mean`, this PR fixes it.
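
The negative-indexing fix for `relax.mean` follows the usual convention of adding the rank to negative axes. A minimal sketch in plain Python (hypothetical helper, not the actual legalizer code):

```python
def normalize_axes(axes, ndim):
    """Map possibly-negative reduction axes into [0, ndim), NumPy-style.

    E.g. axis -1 on a rank-3 tensor refers to axis 2.
    """
    normalized = []
    for axis in axes:
        if axis < -ndim or axis >= ndim:
            raise ValueError(f"axis {axis} out of range for ndim {ndim}")
        normalized.append(axis + ndim if axis < 0 else axis)
    return sorted(normalized)


print(normalize_axes([-1, 0], ndim=3))  # [0, 2]
```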

    To lower `collapse_sum_to` and `collapse_sum_like` properly, this PR
migrates a previous patch mlc-ai/relax#43 which
introduces `collapse_sum` in topi. Now we can remove the skip marker in
the legalizer test for `collapse_sum_to` and `collapse_sum_like`.

    The gradients of `cross_entropy` and `softmax_cross_entropy` are
    removed. The former will be added back and adjusted to the new
    `cross_entropy` introduced in mlc-ai/relax#96.

Further plan in this PR:
- [x] Add gradients for `log_softmax` and `nll_loss` once
mlc-ai/relax#94 is merged.
- [x] Gradients for some tuple related operators such as `split` and
`concat`. It can help us to test the correctness of AD when there are
Tuple-I/O operators.
- (Not in this PR) "Undefined Gradient" representation. As we know, the
gradients of some operators w.r.t. specified inputs are undefined or
meaningless, such as the partial gradient of `indices` in `take(x,
indices)`. Relay directly uses `zeros_like` in this case as it won't
affect gradient propagation. Another choice is to introduce a dummy Expr
named `UndefinedGradient` to represent it. How do we handle this case in
relax?
masahi added a commit to masahi/relax that referenced this pull request Jan 21, 2023
commit 5bf9c8acf12dfba9865ac9f8480341298131dec4
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 16:10:16 2023 +0900

    clean up

commit 5506d92ed9a4c48c63f192ddcb576c9665d4ad5b
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 15:39:39 2023 +0900

    link and run compiled cutlass code, result correct

commit 81d39f84ebb1a7bcfe5c2fa9f97ce2130f932dbb
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 15:13:41 2023 +0900

    compile generated cutlass code

commit c2a68e14575c2711497347d5fc93d15b88c6c79b
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 17 07:47:31 2023 +0900

    codegen working

commit ba26344f85ebe43f88852c8c18b754bf03df1ce1
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 19:41:47 2023 +0900

    wip

commit ed3ac6d632a4798e411573f30d1a090bc05a96fc
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 17:53:10 2023 +0900

    wip

commit 47e09e54a0d405a14a602d7a6d31c49399c5662f
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 17:32:58 2023 +0900

    wip

commit b9e5df768b188de3dda1ef0d0f3db3fd592535d9
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Mon Jan 16 17:25:37 2023 +0900

    copy codegen_c base function

commit fe20e653ecf548f07432f06cd17395b554e6faa5
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Sat Jan 14 08:43:57 2023 +0900

    add cutlass stub

commit 990eec78b58ca259bc067bb32e4020f28d88b7c8
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Sat Jan 14 08:18:57 2023 +0900

    updated cutlass revision

commit 591a8f1ba62d9f8e923f2dcc1702e7e7590e92e2
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Sat Jan 14 08:02:01 2023 +0900

    conv2d + relu DNNL offload works

commit 1365402079626eab5bf99bad96dbfa4abd750175
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Fri Jan 13 16:35:49 2023 +0900

    starting DNNL codegen

commit 4a72e7810b0df31a4fb13856b5b6320ced4e978e
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Thu Jan 12 14:02:19 2023 +0900

    clean up

commit 61cc55e94123f3064e0d1200c70f33b4a537c4ad
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 10 16:26:31 2023 +0900

    pattern based partitioning working

commit 2433733c5458302cbe05e534d6c99bec13fb6d36
Author: Masahiro Masuda <masahi129@gmail.com>
Date:   Tue Jan 10 08:30:20 2023 +0900

    add conv2d match & run test

commit 360429440acb7068fdfd982d597523ebe032eb20
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Jan 9 17:20:05 2023 -0500

    [Op][O2e] Indexing and datatype operators (#338)

commit e45bdb73824d120bb3b848d4fdaa54f88211b509
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Jan 9 14:59:26 2023 -0500

    [VM] Supporting "compiled" exec mode. (#331)

    * [VM] Supporting "compiled" exec mode.

    This PR adds support of "compiled" mode to the VM.
    The compiled mode translates the relax function into a TIR function
    and drives it through the TIR function.

    It is different from the micro AOT codegen, which generates TIR code
    that targets the micro C runtime environment and is useful for
    resource-limited settings with a smaller set of features. Both leverage
    the low-level TIR build that is also shared with TensorIR.

    The current implementation targets the full TVM (VM) runtime, which
    comes with PackedFunc, object, tuple, closure and all kinds of rich
    structure support. This also means that we can leverage the full runtime
    support to handle things like allocation, dynamic shape, easy plugins and
    python interaction, which are not available in a more limited runtime.

    The user directly uses the same API to load the generated code regardless
    of compiled mode or bytecode, and just needs to change one line:

    ```python
    ex = relax.vm.build(mod, target, exec_mode="compiled")
    ```

    Most of the codegen features are lifted before the codegen phase,
    so the overall implementation would be around 500 loc for each exec mode
    and can be further cut down with future introduction of PrimValue.

    The simplicity is thanks to the TVM runtime architecture that allows us
    to compose things together in objects. The only difference is how
    the PackedFunc of high-level driving is being provided.
    In the case of bytecode it is normal interpretation and in the
    case of compiled mode it is TIR.

    It is a complete implementation; unit testcases are added. All codegen
    build tests are updated to include the two exec_modes and have passed locally.
    The only exception is that we skipped some special packedfunc handling (printing),
    which can be further simplified after we introduce PrimValue.

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>

    * Address review comments

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>

commit 32c2bf74eda5ff9cb958e6d54a29c324d53f2869
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Jan 9 13:45:14 2023 -0500

    [Op][O2d] Manipulation operators (#337)

    As tracked by #332, this PR is the O2d milestone of the high-level operator introduction plan.

    This PR introduces a few manipulation operators:
    * broadcast_to
    * concat
    * expand_dims
    * flatten
    * permute_dims
    * reshape
    * split
    * squeeze
    These operators are all well-tested.

commit b39d11a37c899a1625ecee0ffdacc5ef5444365f
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Jan 9 10:57:19 2023 -0500

    [O2h] Neural network and linear algebra operators (#343)

commit 1d6d897ec223cc07768e0382c3e21a196ffdfac8
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sun Jan 8 20:21:50 2023 -0500

    [O2g] Convolution, pooling and image operators (#341)

commit 95f784ece1d61676b88b5455be3dab5e3ddbc75a
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sun Jan 8 16:53:10 2023 -0500

    [Op][O2f] Set and searching operators (#339)

commit be1c32d817bbbbd56329378d6d929dce79ecb0f8
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jan 9 03:38:20 2023 +0800

    simple fix jupyter error reporting (#345)

commit da11e4bf373349ce4142949099e29d11655aa88b
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun Jan 8 23:09:22 2023 +0800

    [TVMScript] Symbolic shape computing (#342)

commit 80808fbf9a02480abf337b8a5edffe34c963feec
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sat Jan 7 18:31:00 2023 -0500

    [Op][O2c] Creation operators (#336)

commit 5efc8f7224f83766875e74669e139ec82119a504
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sat Jan 7 11:14:23 2023 -0500

    [TIR] Create Layout with specified axis dtype (apache/tvm#13663) (#340)

commit ae71be06c8252c211642abb9d5b3e4583bdb6f6a
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Jan 6 16:41:18 2023 -0500

    [Op][O2b] Statistical operators (#334)

commit 8220df74e339cdb6dab38a803b80edc3cd6b92e2
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Thu Jan 5 18:31:48 2023 -0500

    [Op][O1][O2a] Utility, arithmetic and comparison operators (#333)

    As tracked by #332, this PR is the kickoff part of high-level operator introduction in Relax.

    This PR is about the milestone O1 and O2a. Specifically, this PR
    * introduces some of the common utility functions that the registration and StructInfo inference of each operator will often use.
    * introduces unary arithmetic operators: cos, log, negative, sigmoid, sin, sqrt, tanh.
    * refactors and introduces binary arithmetic operators: add, divide, floor_divide, multiply, subtract.
    * introduces binary comparative operators: equal, greater, greater_equal, less, less_equal, not_equal.

    These operators are well tested from three perspectives:
    P1. the op getter can get the correct op by name
    P2. their StructInfo inference results are as expected under all kinds of cases
    P3. the Relax TVMScript parser can parse scripts with the op inside

    For operators in O2a, most operators share almost the same StructInfo inference logic. Therefore, for tests in P2, not every op in each category is tested in every case. For each case, it is good to have only part of the ops in this category tested. This is intended to avoid an overlarge testing file.
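
Most binary operators share StructInfo inference logic, and the shape part of that shared logic is essentially NumPy-style broadcasting. That rule can be sketched standalone (toy code, not the Relax implementation):

```python
def broadcast_shapes(lhs, rhs):
    """NumPy-style broadcasting of two static shapes.

    Align shapes from the right; each pair of dims must be equal,
    or one of them must be 1.
    """
    out = []
    la, lb = list(lhs), list(rhs)
    while la or lb:
        a = la.pop() if la else 1
        b = lb.pop() if lb else 1
        if a == b or a == 1 or b == 1:
            out.append(max(a, b))
        else:
            raise ValueError(f"cannot broadcast {lhs} and {rhs}")
    return tuple(reversed(out))


print(broadcast_shapes((2, 3, 1), (3, 4)))  # (2, 3, 4)
```

The real inference additionally tracks dtypes and symbolic dimensions, but the dimension-pairing rule is the same.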

commit f1cab0a05f05829c4c35e2a7e613bd69f2a17fae
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Jan 5 20:43:28 2023 +0800

    [TVMScript] Ensure consistent struct info between assign lhs and rhs with sinfo annotation (#328)

    * [TVMScript] Ensure consistent struct info between assign lhs and rhs with sinfo annotation

    * fix

    * fix

commit dc7072efe290d7e8c69d8e216311510981fc82e1
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Jan 4 10:13:08 2023 -0500

    [REFACTOR] Hide VM Impl, Improve execution logic. (#326)

    * [REFACTOR] Hide VM Impl, Improve execution logic.

    This PR refactors the VM by hiding most of the VM implementations
    and improving the overall execution logic.

    - Unifies PackedFunc and Closure Table.
    - Update Closure mechanism to no longer depend on string.
    - Update VMMemoryLower to VMBuiltinLower to incorporate more VM intrinsic lowering,
      move some of the codegen intrinsic to this phase.
    - Allow directly pass in function index as VM instruction.

    * Address comment

commit 2449d8c205f0b6e2c346132695b56039b07e9a10
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Tue Jan 3 22:04:16 2023 -0500

    [IR][ASTPrinter] Tweaks to AST printer's handling of struct info (#330)

commit 2d352807090ba1b7e898fbdcb83d6d9427c762cf
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue Jan 3 23:20:47 2023 +0800

    [TVMScript] Enforce `I.DeclareFunc` to have function signature (#329)

commit dcae50e836a0c2999f52d96a372fc7de584951f4
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Jan 2 15:21:49 2023 -0500

    [BACKEND] Refactor and introduce full match-cast support. (#324)

    * [BACKEND] Refactor and introduce full match-cast support.

    This PR refactors VMShapeLower to introduce full match-cast support
    that enables nested tuples, type checks at argument boundary
    and symbolic shape computation.

    Along the way we also refactor and clean up some of the VM codegen
    logic, adding unit-tests for different stages.

    * address comments

commit a36920bf672d22e1d31e1e6f81d0447fd7a55806
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jan 2 23:31:04 2023 +0800

    [TVMScript] Fix empty TupleStructInfo (#327)

commit 80710a826bda66532eeda978668ed157b471b186
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Fri Dec 30 15:57:50 2022 -0500

    [CONTAINER] Hash/Equal/JSON support for ShapeTuple (#325)

    This PR adds hash/equal/json support for shape tuple.
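
Value-based hash/equal for a shape tuple (as opposed to pointer identity) can be pictured with a toy Python stand-in; `ToyShapeTuple` is hypothetical, not TVM's `ShapeTuple`:

```python
class ToyShapeTuple:
    """Immutable sequence of integer dims with value-based hash/equality."""

    def __init__(self, dims):
        self._dims = tuple(int(d) for d in dims)

    def __eq__(self, other):
        return isinstance(other, ToyShapeTuple) and self._dims == other._dims

    def __hash__(self):
        # Equal contents hash equally, so distinct instances with the
        # same dims collapse to one dict/set key.
        return hash(self._dims)

    def to_json(self):
        return list(self._dims)


a, b = ToyShapeTuple([1, 2, 3]), ToyShapeTuple([1, 2, 3])
print(a == b, hash(a) == hash(b))  # True True
```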

commit 343a1e7e2174612031c70ba8547577c7d21839e4
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Thu Dec 29 18:33:17 2022 -0500

    [REFACTOR] StructInfo M3: MatchShape=>MatchCast (#323)

    * Introduce match cast, and code changes along

    * add match_cast parser support (#9)

    * Match cast support for VMShapeLower CanonicalizeBinding

    * Remove `match_shape` (#12)

    * Refactor ExprVisitor/Mutator to consider Expr in StructInfo.

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>

commit e332285559d61db1c5033b8d50cd9d4af6c6b6f4
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Thu Dec 29 01:28:09 2022 -0500

    [REFACTOR] StructInfo M2: Cleanups on legacy shape related items  (#320)

    * [REFACTOR] Remove shape function

    * [WIP] Remove shape_, runtime_dep shape

    * Remove shape_ pass Compile

    * Remove RuntimeDepShape (#11)

    * BlockBuilder: remove CanProveShapeEqual, consolidate binding emit to EmitNormalize

    * Remove DimType, make get_shape_of API different from op.shape_of

    Changes the init importing to direct import so the VSCode navigator
    can directly jump to the definition point.

    * Apply suggestions from code review

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    * Clarify cases where struct info can be deterministically derived

    * Fix remaining testcases

    * Remove InferShape/Type per comment.

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit edadf247551f526188c0a08b3812ffc0a1f9d8bd
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 23 14:46:07 2022 -0500

    [Analysis] Optionally check structure info in well-formedness check (#321)

    With the introduction of structure info in #314, the well-formedness check will report malformedness whenever an Expr doesn’t have defined structure info.

    However, when writing tests for the well-formedness check and the normalizer, usually we manually construct the Exprs, which means their structure info is not defined most of the time. As a consequence, the well-formedness check will always complain “the Expr xxx doesn’t have structure info populated.” Therefore, when the checker fails to complain about the original reason of being malformed, which means the checker is not working, the tests will still pass and we won’t be able to realize there is something wrong with the checker.

    Thus, in this PR we add an optional flag to the well-formedness check. In well-formedness tests, we will turn off the structure info check so that the original reason of being malformed will be revealed correctly.

    ---

    This PR also cleans up the DiagnosticContext parameter in the WellFormed API - the diag_ctx has been unused since the merge of #99.
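
The optional flag described above can be pictured with a toy checker; the names are hypothetical and only echo the shape of the real well-formedness API:

```python
def toy_well_formed(exprs, check_struct_info=True):
    """Return a list of malformedness complaints for toy 'exprs'.

    Each expr is a dict; tests that build Exprs by hand may leave
    'struct_info' as None, so that particular check can be switched off
    to let the original malformedness surface.
    """
    errors = []
    for e in exprs:
        if "op" not in e:
            errors.append("expr missing op")  # the 'original' malformedness
        if check_struct_info and e.get("struct_info") is None:
            errors.append("expr has no struct info populated")
    return errors


hand_built = [{"struct_info": None}]  # malformed: no op, no struct info
print(toy_well_formed(hand_built))                           # both complaints
print(toy_well_formed(hand_built, check_struct_info=False))  # only the real one
```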

commit d548459a1736378398ab773dce413d90d49376cf
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 23 07:33:25 2022 -0500

    [Op] Enforce int64 output shape in CallTIR (#322)

commit 10a87a455bbb84b0a0d20b22bd31784b9f4b9774
Author: Chaosfan <siriusneo@sjtu.edu.cn>
Date:   Fri Dec 23 08:03:48 2022 +0800

    [Bugfix] Handle function name properly in Relax TVMScript printer (#317)

    * remove relax_func_name_ and change logic

    * well_formed check for globalvar and gsymbol consistency

    * revise the logic in well_formed and update test

    * Remove `global_symbol` in test_function_attr.py

    * Update docs

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 29aebb9d24cbf52ab21fd98996633534301ef34d
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Dec 21 20:21:57 2022 -0500

    [REFACTOR] M1: Change parser/printer to only depend on struct info (#319)

    * [REFACTOR] StructInfo M1: Parser/printer/Var/Function to only depend on struct info field

    * Update src/relax/backend/vm/vm_shape_lower.cc

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    * Address comments

    * Allow function to have default value

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit e6173430f491c1d88d2ab77ce0ab43a8c602df30
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Dec 21 00:42:29 2022 -0500

    [REFACTOR][ARCH] Introduce StructInfo M0 (#314)

    * [IR] Introduce StructInfo

    * StructInfoFunctor and Analysis Support

    * [TVMScript] Parse type/shape annotation with StructInfo

    * remove runtime type assign

    * Remove type/shape during parsing (#2)

    * Normalizer prep: simple checks and legacy function renaming.

    * Struct info deduction in BlockBuilder.

    * Two TODOs

    * StructInfo Normalizer Fixes (#3)

    * StructInfo AST Fix

    * Fix Extern Func Deduction and shape mutator.

    * Update VoidStructInfo & globalvar (#4)

    * Fix passes and proper sinfo propagation.

    * Refactor EraseToWellDefined to Enable Remapping

    * [WIP] First stab at symbolic param tracking

    * Update EraseToWellDefined to support symbolic shape return (#5)

    * fix R.shape with ndim (#6)

    * Remove update shape/type

    * Address review comment, AnnotateTypeShape=>AnnotateStructInfo

    * Update include/tvm/script/ir_builder/relax/frame.h

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    * Address comments

    * Update printer to use structinfo (#7)

    * Update Error mechanism to prep for obj loc based reporting

    * Symbolic shape aware function call return value derivation.

    The main flow works as follows:
    - Match and populate shape_var_map and var_map by visit each pair of
      param and call arguments.
    - Call EraseToWellDefined to map the ret parameter to new result.

    * [ANALYSIS] Refactor well-form to only look at struct info.

    * Update comments according to reviews.

    * Update include/tvm/relax/struct_info.h

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Tianqi Chen <tqchen>
    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 151701740fac3a53b35799a82c85d86f91b720ee
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Fri Dec 16 17:48:26 2022 -0500

    Update relay_translator.py

commit ad0f3179a84b3bc167f91c3eb082cb996b1d04e2
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 16 17:37:00 2022 -0500

    [Translator] Remove global symbol and follow-up fix for #262 (#316)

    This PR removes the `global_symbol` linkage added by Relay Translator. It also fixes unaddressed comments of #262.

    All tests can pass locally and I believe it is safe to merge this PR directly.

commit 850deded1201001d833ac65991fb1a4c6509cb1b
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Dec 16 16:19:48 2022 -0500

    [Translator] Support translating op calls with Tuple input (#262)

    Previously, when a Relay function contains a Call which directly uses Tuples as arguments (the example below),
    ```
    %25 = (%23, %24) /* ty=(Tensor[(1, 160), float32], Tensor[(1, 160), float32]) */;
    %26 = concatenate(%25, axis=-1) /* ty=Tensor[(1, 320), float32] */;
    ```
    our Relay translator is unable to generate the corresponding CallTIR, because the translator always assumes an argument of a Call is mapped to a single tensor (see the code snippet below: the translator directly passes the Relax variable `new_args[-1]` to the function `te_tensors`, which translates a Var to a single tensor).
    https://github.com/tlc-pack/relax/blob/60e9a01cdfdd013945790fc03d5abad29b8a7c0b/python/tvm/relax/testing/relay_translator.py#L124
    https://github.com/tlc-pack/relax/blob/60e9a01cdfdd013945790fc03d5abad29b8a7c0b/src/relax/ir/emit_te.h#L56-L61

    But in fact, the Relax variable may correspond to a Tuple of tensors, which wasn’t taken into consideration before. Such a case can lead to an error in `TETensor` when creating tensors.

    Therefore, this PR fixes the issue by examining the Relax variable before tensor creation for Relay Call arguments. If an argument has a Tuple shape and TupleType, we break down the tuple Variable, emit a TupleGetItem for each field, and meanwhile create a tensor for each field.
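
The fix amounts to: when an argument is tuple-typed, emit one per-field access (a `TupleGetItem` in the real translator) instead of treating the argument as a single tensor. A pure-Python toy of that flattening step (names hypothetical):

```python
def flatten_call_args(args):
    """Expand tuple-typed arguments into one (field_access, type) per field.

    Each arg is a (name, type) pair; a tuple type stands in for a Relax
    Tuple-typed Var, and "name.i" stands in for TupleGetItem(name, i).
    """
    flat = []
    for name, typ in args:
        if isinstance(typ, tuple):          # a Tuple-typed argument
            for i in range(len(typ)):       # one get-item per field
                flat.append((f"{name}.{i}", typ[i]))
        else:
            flat.append((name, typ))
    return flat


# %25 is a 2-tuple of tensors, as in the concatenate example above:
print(flatten_call_args([("%25", ("T1", "T2"))]))
# [('%25.0', 'T1'), ('%25.1', 'T2')]
```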

commit 54a0ff551adb90937073675b4fb3d5439b814398
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Fri Dec 16 21:02:13 2022 +0800

    Remove relax parser_v1 (#313)

commit b363dd48aced8fb939880db8cf595ed65b7ecc77
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Dec 14 22:51:38 2022 -0500

    [Debugging][Arch] Expose `shape_` fields for `TupleGetItem` and `If` nodes, fix AST printer accordingly (#311)

    * Make the shape of If and TupleGetItem nodes accessible in Python

    * Remove order-dependency from AST printer tests

    * Trailing whitespace

commit 4bb01fe4eccdd59614cc264838a389b21dd40388
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Dec 14 08:11:47 2022 -0800

    [IR] Dedicated Relax Call, Constant, Tuple, TupleGetItem, If (#306)

    * relax.Constant.

    * Add callnode;

    * Tuple, tuplegetitem, If

    * mypy.

    * lint

    * rebase & fix printer.

    * rebase & remove virtual_device_

    * address comments & leave todos.

    * address comments.

    * address comments.

    * tuple index.

    * type anno.

commit 4cda8a5881fd4cd2473258b35244fc4129b6110c
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Dec 14 09:09:03 2022 -0500

    [BlockBuilder][Refactor] Normalize nested `SeqExpr`s (#310)

    Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 5aab150f322526c1a7bfe6cea0f4d7a7543a7f46
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Tue Dec 13 17:06:06 2022 -0500

    [ExprMutator] No prologue in VisitWithNewScope when input is SeqExpr (#305)

commit 0bf1f1b784f19298117e36016a2e522f58c143fc
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Tue Dec 13 15:27:05 2022 -0500

    [REFACTOR] Refactor BlockBuilder (#308)

commit 28d598b6a7c55f95f8f9c2ccd5c860ba5451232d
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun Dec 11 01:28:56 2022 +0800

    [Normalizer] Combine Nearby Blocks in SeqExprs (#298)

commit e152c50e368454afab75425fcb0863b1c328bf4c
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Thu Dec 8 19:33:18 2022 -0500

    [ARCH] Add VisitBinding second-level dispatcher in Expr type. (#301)

commit fed6b8fc88b824ec68260417793447dbe524c4c3
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Dec 7 16:55:40 2022 -0800

    [Linkage] Cleanup global_symbol attachment and linkage. (#300)

    * Cleanup global_symbol attachment and linkage.

    * lint

    * Add global_symbol to the main function in translation.

commit e0907d4fd03af1731310647d3d0547bdff2cfaf6
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Tue Dec 6 21:35:20 2022 -0500

    [ARCH] Introduce NestedMsg to robustly handle nested-tuple analysis (#295)

commit 2eb99975dc1b40b83db7dcbb96b748503dcb3319
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Dec 5 21:57:21 2022 +0800

    [TVMScript] Update script printer to enable roundtrip tests (#291)

commit f8ab9890e14c2533c401969ebf11dd591beff592
Author: Hongyi Jin <3231950289@qq.com>
Date:   Sun Nov 27 09:59:26 2022 -0500

    [RUNTIME] Correctly handling export_module when exporting modules of different type (#13489)

commit 9009840e654a9900009f7776a19e26f29b1e3f85
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Fri Dec 2 18:33:50 2022 -0500

    [Debugging] Support PackedFuncType in the AST Printer (#289)

commit bda0e42f05eaba657c40a850486e55c39924f3bf
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Fri Dec 2 18:31:39 2022 -0500

    [IR][Bugfix] Improvements to the normalizer and well-formed checker (#288)

commit d5fe87b21546995c7a88905bd04b4e944d28a0f4
Author: Yong Wu <yongcale@gmail.com>
Date:   Thu Dec 1 20:00:38 2022 -0800

    Enforce i64 index in ShapeExpr (#281)

commit 9c9eb5585501a5da0f25ca38d7d3ac8269b6714c
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Dec 1 11:00:47 2022 -0800

    [Parser] Register memory operators to new parser. (#279)

commit 28c3f68cc51d2c22936c5496debcb8c2de54040b
Author: Yong Wu <yongcale@gmail.com>
Date:   Thu Dec 1 08:55:31 2022 -0800

    [TVMScript] enable the closure test (#280)

    * [TVMScript] enable the closure tests.

commit eb9d531b2565cdd000f46e5ecae2c45b9f589abe
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Dec 1 05:47:05 2022 -0800

    [Normalizer] Enforce all Expr have checked_type_ invariance after normalization. (#287)

commit 43f81ddf4afc2f4fdb214c9f994e844f53126cdb
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Mon Nov 21 19:25:43 2022 -0500

    [Debugging][Bugfix] Debug printer improvements: Print `shape_` and `checked_type_` for all nodes and handle non-binding `MatchShape`s (#261)

    The initial AST printer only included the `shape_` and `checked_type_` fields for variables because of the potential for infinite recursion (`shape_` nodes can contain other expressions, which in turn have `shape_` nodes). This PR cuts off the potential recursion to allow for printing these fields for all Relax expressions, which should be more useful for debugging.

    This PR also fixes a bug: The AST printer previously did not handle `MatchShape` bindings that did not bind a new variable.

commit 304048c33956dddb5027fec26541d57f903d8ca2
Author: YuchenJin <yuchenj@cs.washington.edu>
Date:   Thu Nov 17 17:02:11 2022 -0800

    Fix after rebase, and reorganize the TVMScript folder structure.

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>

commit e7277460f0a2c7c980be9323cdf7919dc38153e2
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Nov 17 00:31:32 2022 +0800

    [TVMScript] Switch to the new parser (#276)

    * [TVMScript] Support cross-function call for relax function

    This PR adds support for cross-function calls for relax functions, by declaring a function signature (i.e., an empty function that contains params and return type/shape but w/o a body).

    However, the PR runs into the issue of block_builder shape deduction, which does not use the function `ret_shape` to infer the shape of GlobalVar Calls.

commit 7152175762613130e3ba647c77cc9818312a5b06
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Nov 5 16:45:33 2022 -0500

    [CI] Enable Mypy type checking for Relax; Fix typing errors to pass Mypy checking. (#270)

commit 6f8f6da505b835345d7709d06bdfd8dddce7e85b
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Nov 3 08:16:35 2022 -0700

    Introduce memory primitives (#255)

    Introduce the memory primitives, including `relax.memory.{alloc_storage, alloc_tensor, kill_storage, kill_tensor}`.

commit 48b7c158cc01532f9019a2e615f2d94766a9464c
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Oct 20 08:30:47 2022 +0800

    [TVMScript] Update Type Annotation Behavior of the Parser (#269)

    This commit changes the behavior of the parser to allow type annotations, as suggested by the community.
    The current behavior:
    - Use the more refined type/shape between user annotated and deduced type/shape.
    The updated behavior:
    - Always use user annotations
    - Only checks if the type/shape is valid.
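
The updated rule (always keep the user's annotation, but verify it against what was deduced) can be sketched as follows; this is a toy checker with hypothetical names, where `None` stands for an unknown dimension:

```python
def resolve_annotation(annotated, deduced):
    """Always return the user's annotation, rejecting invalid ones.

    'annotated' and 'deduced' are shape tuples or None; a known deduced
    dim must match the corresponding annotated dim exactly.
    """
    if annotated is None:
        return deduced  # nothing annotated: fall back to deduction
    if deduced is not None:
        if len(annotated) != len(deduced):
            raise TypeError("rank mismatch between annotation and deduction")
        for a, d in zip(annotated, deduced):
            if a is not None and d is not None and a != d:
                raise TypeError(f"annotated dim {a} conflicts with deduced {d}")
    return annotated  # the user annotation always wins


print(resolve_annotation((None, 4), (3, 4)))  # (None, 4)
```

Note the contrast with the previous behavior, which would have returned the more refined `(3, 4)` here.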

commit 5c3079bb6e1e4eeb4dc2d9b740facb2686c67519
Author: sung <sunggg@umich.edu>
Date:   Mon Oct 17 19:07:01 2022 -0700

    Reenable autotvm silencer; fix e2e_auto_tir.py; fix lint.

    Co-authored-by: YuchenJin <yuchenj@cs.washington.edu>

commit 85b81292626ab6f23caf2b61095a6f957b61b21c
Author: sung <sunggg@umich.edu>
Date:   Mon Oct 17 18:09:34 2022 -0700

    Recover: [Bugfix] Couple of bug fixes to run TVM-gen code together with BYOC (#249)

commit c46ae8566582f1fcd8fcda1479943d3abb95b3b0
Author: sung <sunggg@umich.edu>
Date:   Mon Oct 17 17:16:01 2022 -0700

    Recover: [Pass] Separate ApplyHistoryBest from tuning passes (#226)

commit 83bc7cb144643d5823bf06220186528923835667
Author: Junru Shao <junrushao1994@gmail.com>
Date:   Sun Oct 16 22:52:56 2022 -0700

    Enable Hexagon tests

commit f9f4f7904ec5468a725b2ba924a619a7c5ed4e43
Author: Junru Shao <junrushao1994@gmail.com>
Date:   Sat Oct 15 15:25:56 2022 -0700

    Recover dropped commits

    [TVMScript] B4: If branch support (#263)
    B8: Local Function Support  (#258)
    [TVMScript] B3: Type annotation checks (#256)
    [TVMScript][Parser] B1: Dataflow block (#252)
    [TVMScript] B2: match shape support (#251)
    [TVMScript] B6/B7: Symbolic shape and var shadowing  (#245)
    [TVMScript] B5: Support relax op (#244)
    [TVMScript] B0: Call_tir support (#243)
    enhance parser error reporting (#242)
    [TVMScript] A1: Relax Parser infra (#240)
    update ci image versions. (#241)
    [TVMScript] B2-4: TIR IRBuilder (#239)
    [TVMScript] A0: Relax IRBuilder infra (#235)
    [TVMScript] B5-6: TIR IRBuilder (#231)
    [TVMScript] B1: IRBuilder (#228)
    [TVMScript] New Parser: Part C (#218)
    [TVMScript] New Parser: Part A (#221)
    [TVMScript] New Parser: Part B (#217)

    Not recovered:
    [Pass] Separate ApplyHistoryBest from tuning passes (#226)
    [Bugfix] Couple of bug fixes to run TVM-gen code together with BYOC (#249)

    co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
    co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

commit 65a53034bc0bee9877a1bdf363c2eadcde35f226
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Thu Oct 13 23:06:55 2022 -0400

    [Op][Debugging] Add `assert` operator (#260)

    It was brought up that Relay lacks an assert operator, so we may as well have one in Relax for debugging. One issue is that we can't name it `assert`: since `assert` is a Python keyword, `relax.assert` is a syntax error, so it cannot be a field name of the `relax` module. Thus the op is named `assert_op`, which is not ideal but serves its purpose.

commit 71d96e6c0a314936fa49fd7bc1ea79069027ab12
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Oct 12 05:07:33 2022 -0700

    [Pass] Support Function and If in Normalize pass. (#268)

    * Support Function and If in Normalize pass.

    * Use structural equality for expr_memo_.

    * Change back to pointer equality for expr_memo_; Add more tests.

    * rebase.

commit 312a344cdeec66b1330a80d34ca78556fb338e7c
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Tue Oct 11 18:25:29 2022 -0400

    [Analysis] Expose analyses related to vars in Python (#265)

    Previously, analyses to gather up all variables, free variables, bound variables, all global variables, and all global variables that are called had been implemented in C++ but had not been exposed in Python or tested. This PR exposes these analyses and adds tests for them.

    Two further changes:
    * The analyses previously ignored variables bound in `MatchShape` nodes; these are now treated as bindings too.
    * `rec_global_vars` is renamed to `called_global_vars`, since the analysis itself does not check for recursion.
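    The bound/free-variable analyses follow standard scoping rules. A minimal stdlib-only sketch over a toy let/call expression tree (tuples, not the Relax AST) illustrates the idea:

```python
def free_vars(expr, bound=frozenset()):
    """Free variables of a toy expression.  Nodes are tuples:
    ("var", name) | ("let", name, value, body) | ("call", fn, *args)."""
    kind = expr[0]
    if kind == "var":
        return set() if expr[1] in bound else {expr[1]}
    if kind == "let":
        _, name, value, body = expr
        # `name` is bound in the body, but not in its own defining value.
        return free_vars(value, bound) | free_vars(body, bound | {name})
    # "call": union over the callee and all arguments
    result = set()
    for sub in expr[1:]:
        result |= free_vars(sub, bound)
    return result

# `x` is bound by the let; `y` and `z` remain free.
expr = ("let", "x", ("var", "y"), ("call", ("var", "x"), ("var", "z")))
```

    Treating `MatchShape` as a binding, as this PR does, means its LHS vars would join the `bound` set just like the let-bound `name` here.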

commit 132702be7e7ed0256045d7a405e532c3d5beef6d
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Mon Oct 10 18:19:38 2022 -0400

    [Expr] Allow annotating return shape on function nodes (#253)

    This PR adds a `ret_shape` field for specifying the shape of the function's return value. At present we will not use this information, but by adding it to the AST, we will be able to parse the return shape and use it in the future. Parser V1 in this PR will always list the `ret_shape` as `RuntimeDepShape`.

commit 7276c9e2ee13a4754775491ca36a7aae2d55b827
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Sat Sep 24 00:11:45 2022 -0400

    [Bugfix][VM] Properly convert tensor inputs in `save_function` (#257)

    It was observed that closures saved using `save_function` would crash when used over RPC with the `time_evaluator`, whereas using `set_input` and `invoke_stateful` worked as normal. While I am not entirely sure why these failures happened over RPC only in `time_evaluator` (but not in other RPC trials), it became clear that `set_input` performs a conversion of input tensor values in `SetInputTensorWithIndex`, while `save_function` was not doing this. Adding this conversion fixed the observed bug.

commit 7183c7ffbe896dd9b5f5742b62afe9c821dae682
Author: Josh Fromm <jwfromm@octoml.ai>
Date:   Wed Sep 21 17:07:08 2022 -0700

    [Call TIR] Fix bug when invoking call_tir with scalar values. (#254)

    This small PR changes a check in the tvmscript parser to support empty shape tuples which are used to represent scalars. I added a scalar addition test to make sure it works properly.

commit 605ba8d1548efb90980f9b18ea94f1d53f9ec3ec
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Sep 14 17:27:03 2022 -0400

    [Bugfix][Op] Register attributes for unique and print (#248)

    Attempting to use `dump_ast` on functions containing the operators `relax.unique` and `relax.print` previously crashed due to being unable to query their attributes' keys. It turned out that this was a problem with the operator attributes: they had not been registered on the Python side, so the Python representation treated them as opaque TVM objects. This PR corrects this mistake.

commit f4525dd8a3e61f572b50107555cef4b469c971f4
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Sep 14 17:24:40 2022 -0400

    [VM][Benchmarking] Add option for saving e2e results as CSV file (#247)

    This PR makes some small additions to the end-to-end AutoTIR script, namely fixing a bug (it was incorrectly using the stateful API) and adding an option to save the test results as a CSV file for benchmarking purposes (the data can then be analyzed separately as needed).

    These changes also required a small extension to the save_function method in the VM, namely allowing it to take keyword arguments.

commit f1ee4b6cd2c3ee0596cef6f5b7ff7e715fb4ae0d
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Wed Sep 14 17:23:29 2022 -0400

    [BugFix] Enable emit global MatchShape (#246)

    Fix an incorrect check that disabled emitting a global MatchShape outside a dataflow block and mistakenly enabled emitting a dataflow MatchShape outside a dataflow block.

commit 0a7a0a9daf5f1a2fa06ee6cd6169a28d397821fa
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Thu Sep 8 09:49:05 2022 -0400

    [Pass] Canonicalizing Bindings (#233)

    It may be useful for some passes to collapse chains of definitions, particularly after other compiler transformations that may reduce or simplify some expressions.

    This pass takes chains of definitions and replaces references to later definitions with the original one. It works by checking `LookupBinding` for each var use-site and replacing the var with its definition if the definition was another var. (Note: this required updating `BlockBuilder` to also update its binding map for `MatchShape` nodes; that was arguably a bug.) Additionally, `MatchShape` bindings where the LHS and the RHS are guaranteed to match at compile time are canonicalized into ordinary `VarBinding`s.
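    The binding-collapsing idea can be sketched on a toy straight-line IR (plain tuples rather than Relax objects; `canonicalize_bindings` is an illustrative name, not the pass's API):

```python
def canonicalize_bindings(bindings):
    """Collapse chains of trivial var-to-var bindings in a straight-line
    block.  A binding is (lhs, rhs); rhs is either the name of an earlier
    lhs (a trivial alias) or an (op, *args) expression tuple."""
    root = {}                      # var -> its canonical definition var
    out = []
    for lhs, rhs in bindings:
        if isinstance(rhs, str) and rhs in root:
            # `lhs = rhs` is a pure alias: drop it and remember the root.
            root[lhs] = root[rhs]
        else:
            op, *args = rhs
            # Rewrite uses of aliased vars back to their original definitions.
            out.append((lhs, (op, *[root.get(a, a) for a in args])))
            root[lhs] = lhs
    return out
```

    So `x = add(a, b); y = x; z = mul(y, y)` becomes `x = add(a, b); z = mul(x, x)`, which is the kind of simplification downstream passes can then exploit.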

commit 7a6f91f7d4077eebf926aa1f19281404494b9362
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu Sep 1 07:02:57 2022 -0400

    [Hexagon] Use uploaded path to load module. (#238)

    * Fixes a bug to use the uploaded file remote path for loading the module
    remotely.

    * Modifies the task_python_hexagon.sh script to only run passing test
    on device. This is used by Jenkins CI.

commit e50290140c204ae091e335b797a07f2f6567a163
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Aug 18 21:51:35 2022 -0700

    [Pass] New Python ExprVisitor/ExprMutator! (#190)

    Add decorators `visitor` and `mutator` to help users create `ExprVisitor` and `ExprMutator` in Python. Users can customize the visit/rewrite/post-order-rewrite functions in Python. `PyExprVisitor` and `PyExprMutator` list the functions users can customize.

commit 7313855476cc522bf3e8bdbe7a60b82cd725fe4c
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Thu Aug 18 15:20:06 2022 -0400

    [BugFix] Expose `relax.expr.Constant` to `relax.Constant` (#230)

commit cdfd4e939f2d1e88c560a05d83ddf2f7afe70304
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu Aug 18 02:25:13 2022 +0800

    [FIX] Fix windows build issue when allocating a dynamic array (#219)

    In the current codebase, kNumArgs is a runtime-dependent variable (i.e. its value depends on the input shape of Array).

    Allocating arrays with runtime-determined sizes is not allowed when building on Windows (I'm surprised it compiles on Linux and macOS).

commit 887762cd97686ae23a61609ca9ffc8d6a2c5178b
Author: Yong Wu <yongcale@gmail.com>
Date:   Mon Aug 15 08:00:31 2022 +0800

    Update with rebase

commit 5a23346bc437043b48866411e39dfcf066edda59
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sun Aug 14 14:44:12 2022 -0700

    [Bugfix][VM] Fix var binding to a ConstantNode; Force VM if.cond register to take an NDArray instead of POD. (#216)

    Fix the bug in #212. The cause of this bug is that VM codegen did not handle binding a ConstantNode to a variable (`x = relax.const([1, 2])`) and saving the constant NDArray to a register. Previously the codegen only handled the case where a ConstantNode appears as a CallNode argument. Now it's fixed and a unit test is added.

    Fix the bug in https://github.com/tlc-pack/relax/issues/214#issuecomment-1211411432: the issue was caused by the VM simply reading the condition register of the If instruction and expecting it to be a POD int or bool. https://github.com/tlc-pack/relax/commit/811e877c289fa52f55886c8a3e8dce10ed84915f adds a `LoadScalarInt` function, similar to the Relay VM's, to check that the If.cond register stores an NDArray and cast it to int64. Since we haven't introduced PrimValue and PrimType (which represent POD values like int and bool) to the Relax language yet, let's enforce `If->cond` to be a Tensor (NDArray at runtime).

commit 6c9d403503297a0d0e28318bafcba9fc9c99ae42
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Fri Aug 12 13:53:28 2022 -0400

    [VM][UX] Allow for saving closures to avoid extra dictionary lookups in timing trials (#208)

    This PR implements a function that allows for saving a `PackedFunc` in the VM's module that just calls an existing function with a specific set of arguments to address #179 and #178. The main use of this is for timing, to avoid some overhead in looking up functions.

commit e172b40af31dc3384adbcf6e7b0bce7f31ce41ea
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Thu Aug 11 19:55:57 2022 -0500

    [Pass][UX] Statement rewriter for DataflowBlock (#210)

    - Implements a few APIs to quickly perform statement-level mutation: `add`/`remove_unused`/`remove_all_unused`/`replace_all_uses`.
    - Implemented `remove_all_unused` to remove dead statements inside `DataflowBlock` cc: @psrivas2
    - Addresses minor issues (unnecessary headers and bad docstrings) in https://github.com/tlc-pack/relax/pull/163
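    Dead-statement removal in a straight-line block is essentially a backward liveness sweep. A stdlib-only sketch of the idea (tuples standing in for bindings; not the actual `remove_all_unused` API):

```python
def remove_all_unused(bindings, outputs):
    """Drop statements whose results are never used, directly or
    transitively.  `bindings` is an ordered list of (var, uses), where
    `uses` is the set of vars the statement reads; `outputs` are the
    block's live results."""
    live = set(outputs)
    kept = []
    for var, uses in reversed(bindings):   # backward liveness sweep
        if var in live:
            kept.append((var, uses))
            live |= uses                   # everything it reads stays live
    kept.reverse()
    return kept
```

    A single reverse pass suffices here because each statement can only use vars defined earlier in the block.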

commit 37791e0a5d4a495365fd647f2cecbed16f3a3785
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Thu Aug 11 13:50:56 2022 -0500

    Clean warning messages by Clang and Pylint (#215)

    * refact: clean clang warning in relax

    * refact: fix pylint

    * fix cpplint and clangd suggestions

    * fix: no cpplint on virtual-override

commit 0b00715dc634aa7f091e942a54a29ee9c802ccf9
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Aug 10 11:47:37 2022 -0400

    [VM][UX] Implement stateful API (#207)

    This PR implements the stateful API discussed in https://github.com/tlc-pack/relax/issues/179. It ensures that if you use `set_input` to set inputs, you must use `invoke_stateful` to run the function (otherwise failing) and must obtain the results using `get_output`. It handles nested tuple returns.

commit ed7b77e040654582d1ab1b9535ebbc4da77da243
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Tue Aug 9 17:07:52 2022 -0400

    [Op][Debugging] Add a print operator (#201)

    * Attempt at adding a print operator

    * Fix the registration

    * Actually use the format string

    * Improve test

    * Fix comment placement

    * Improve the docstring for relax_print

    * Handle tuples too

    * Formatting :(

    * Correct commit message

    * Match attr name across Python and C++

    * Make print variadic

commit a9bd3053c1106d1926fce1dc5787fc8be27f3985
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Fri Aug 5 11:45:03 2022 -0400

    [Pass] Implement legacy lowering pass that leverages relay op strategy (#189)

    This PR implements Relax Op lowering that leverages existing Relay Op Strategy (legacy).
    As ops like conv2d and matmul are relay- and relax-independent, this pass assumes that we can always find Relay op equivalents for such Relax ops and uses their info to leverage the Relay op strategy.

commit 1a1bcf75d97b2e7e4f758b6cd08bd747b222ef36
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Thu Aug 4 17:56:17 2022 -0400

    [Pass] Introduce metaschedule as a tuning pass (#188)

    This PR delivers MetaSchedule tuning as a tuning pass.
    We can either tune at the IRModule level with relax.transform.MetaScheduleTuneIRMod or at the PrimFunc level with relax.transform.MetaScheduleTuneTIR.

commit 7144654633477ea0d2bff300ba753dc8bfdeae4d
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Thu Aug 4 14:34:10 2022 -0400

    [Example][UX] Make the RPC timeout configurable in the `e2e_auto_tir` example (#186)

    Running the e2e_auto_tir example over RPC can run into issues due to timeouts because some models can take a long time to run on some machines. This PR makes the RPC timeout configurable to more easily address these issues.

commit 81e565e5df90cfe12d22deb7b26845ea3aa13526
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Wed Aug 3 19:38:21 2022 -0400

    Fix BlockBuilder Scope Recovery in Misuse (#199)

    This happens in interactive use cases. When a function-scope exit triggers an error, we need to restore BlockBuilder.current properly so users can try again.

commit 21b1e7dc35dc838214cd4b6f26fbc31492323b02
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Wed Aug 3 19:09:21 2022 -0400

    [Testing][AST] Add a simple AST printer for debugging (#198)

    * Add ast printer

    * Print seq expr body

    * Match annotation field names to real AST

    * Handle call attrs and func ret types

    * Add more advanced test cases

commit 89f55c8167a80b4b9c8751309b5db648fb4db047
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Wed Aug 3 09:59:47 2022 -0500

    [UX] Adopt changes from tvm-main and render code with IPython.display (#192)

    Render code with IPython.display.HTML if possible to fix the ansi-escape 24-bit rendering issue in Colab.

commit 0b52b558eb14b3f113a4b543c8f0a824baaa58bc
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Mon Aug 1 11:59:24 2022 -0500

    Dataflow Pattern Lang: Core Matching Features (#163)

    The structure is similar to Relay's pattern matcher (https://github.com/apache/tvm/pull/5231). The main difference is that the pattern types are adapted to be Relax-compatible. Relay pattern types, some less-used patterns (IfPattern), and df-topological patterns (DominatorPattern) are omitted (some of them will be brought in later).

    The implementation splits patterns into two parts:
    - **Match an Expression**: match an expression syntactically (`MatchExprPattern`, i.e., `DFPatternMatcher`);
    - **Match a Graph**: match a graph (cross multiple `VarBinding`) topologically (`MatchGraphPattern`);

commit 74371634e9a011e63650b734aba20546b016c524
Author: Jiawei Liu <jaway.liu@gmail.com>
Date:   Tue Jul 26 20:06:25 2022 -0500

    [UX] Highlight TVMScript with Pygments (#185)

commit 15e54ef215950944ffd74858c12c30aabcb0dcce
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sat Jul 23 11:22:13 2022 +0800

    [Pass] Enhance BindParams to take numpy dict as input (#184)

commit cf2e3b97110c805597059c5ba8303a653417e080
Author: Steven S. Lyubomirsky <slyubomirsky@octoml.ai>
Date:   Mon Jul 18 21:45:21 2022 -0400

    [Bugfix][VM] Ensure set_input works over RPC by not returning an array of argument names (#183)

    Currently, attempting to use the VM's `set_input` method will fail over RPC because `set_input` calls `get_func_param_names`, which returns an array of parameter names. RPC does not support sending arrays. This PR corrects this issue by instead having `set_input` query the function arity and then query the argument names one by one, which is the approach taken by the Relay VM (accordingly, the names for the functions used to do this, `get_function_arity` and `get_function_param_name`, are taken from the Relay VM).

    This PR also adds a unit test over RPC on localhost.
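    The scalar-by-scalar querying pattern is easy to sketch with a stand-in for the remote VM (the class and helper below are illustrative; only the method names `get_function_arity` and `get_function_param_name` come from the description above):

```python
class FakeRemoteVM:
    """Stand-in for a VM reached over an RPC layer that can only
    transport scalar return values, not arrays."""
    def __init__(self, params):
        self._params = {"main": list(params)}

    def get_function_arity(self, func_name):
        return len(self._params[func_name])

    def get_function_param_name(self, func_name, index):
        return self._params[func_name][index]

def param_names(vm, func_name):
    # Query one scalar per RPC call instead of shipping an array across.
    return [vm.get_function_param_name(func_name, i)
            for i in range(vm.get_function_arity(func_name))]
```

    Trading one array-returning call for N scalar-returning calls costs extra round trips, but stays within what the RPC layer can serialize.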

commit b0e57dbc0862499c3f2a7d91858354c41fcf5e95
Author: Yong Wu <yongcale@gmail.com>
Date:   Fri Jul 15 11:50:29 2022 -0700

    Fix after rebase

commit 3494b7a47bf0f7c3219538b2e9064b825cf3258c
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Mon Jul 18 00:38:41 2022 -0400

    [Pass Infra] Tuning API serialization and database support (#168)

    * refactor tuning API to support serialization of Choice, Knob, Trace

    * Implement tuning api JSON database

    * Add comments

    * fix pylint

    * fix cpplint

    * reflect feedback

    * add minor comment for the future work

commit 777549a6037cc97b698f53ed629cf65c33ae7eca
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jul 18 00:05:14 2022 +0800

    [Fix] fix windows build issue (#182)

    TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS is needed when we have a default-like constructor (e.g. (Span span = Span()))

commit b81e6a9838f92ba412a0bd4951a46cc61a43a22d
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Mon Jul 18 00:04:03 2022 +0800

    fix print twice issue (#181)

commit d4cc79ed664bbe34a4d9dab2923cd5a7a7c5b52c
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Jul 14 09:15:44 2022 -0700

    [Pass] Python ExprMutatorBase/ExprMutator (#172)

    - Rewrite ExprFunctor in Python. New ExprMutatorBase and ExprMutator in Python.
    - Implement demo passes: RewriteFMA and FuseFMA with Python ExprMutator.
    - Expose some functions to ffi in block_builder.py

commit 01cdc4d43258b1fb9dcc630f05f38f792e3bc513
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Tue Jul 12 19:25:51 2022 -0400

    [VM] Deprecate API to save/load executable to file (#176)

    The Executable `save_to_file` and `load_exec_from_file` API was used to
    save/load just the executable to/from a file. This was confusing, as it did
    not export the TensorIR kernels in the Relax module, leading to
    bugs such as https://github.com/tlc-pack/relax/issues/175.
    Moreover, the API was only used in some tests and was not useful for end
    users.

    Deprecating this API to have a single uniform way of
    serializing/deserializing TVM IRModule using `export_library` and
    `tvm.runtime.load_module` API.

commit 74b3d67e8ae74aed3446a5ae5a05b8f5586e2c3b
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Fri Jul 1 09:31:30 2022 -0700

    [Refactor] Generic dispatching for `IsBaseOf`; Simplify Type/Expr initializations; `relax` -> `R` in printer; Disallow local function in VMCodegen (#171)

    - Generic dispatching for `IsBaseOf`: `IsBaseOf` uses a bunch of if-else to check if the subtype relation between the base type and derived type, now it's changed to use a generic TypeFunctor to dispatch on the base class to do the check.
    - Simplify Type/Expr initializations: We had to write `RuntimeDepShape(Span()`), `ObjectType(Span())` to initialize several Types and Exprs, this is due to the `TVM_DEFINE_OBJECT_REF_METHODS` macro that sets the constructor with `= default`. By changing to use `TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS`, we can now just write `RuntimeDepShape()` without specifying an empty span.
    - `relax` -> `R` in printer: Change to print `R` rather than `relax` in TVMScript as the default behavior. This is consistent with our test cases and TIR convention: using `T` as shorthand.
    - Disallow generating code for local function in VMCodegen: these local functions should have been lifted in the lambda lifting pass before codegen.

commit 8fdc3ba3eae0d1ffc535e240be251aaae5546eb8
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu Jun 30 15:14:40 2022 -0700

    [Parser] Enable R.parser.pretty_print to print TIR PrimFunc (#174)

    This way we can have a uniform API to print IRModule, TensorIR
    function and Relax functions.

commit ed0414540c9fbc063aa727cfc71bdee51a4bafdd
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Wed Jun 29 08:20:17 2022 -0700

    Update tests to use `set_input` for rpc calls. (#173)

    Fix relax-hexagon tests to use set_input api, which is the correct way to invoke a function over RPC.

commit 1f962bda7a79d13fee1a4f9f4ad3ddde4f5467b2
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Tue Jun 28 20:49:33 2022 -0400

    [BYOC][PASS] Prototype implementation of modular compilation w/ TensorRT (#164)

    This PR delivers the prototype of the followings:
    - Relax BYOC JSON codegen
    - Relax BYOC TensorRT codegen
    - Extension in Relax VM to support external modules
    - `RunCodegen` pass: run codegen for the annotated relax functions
       - Annotation (dispatch decision) will be done by earlier passes  e.g., greedy heuristic, Collage
       - The generated runtime module and Codegen itself should be TVM objects
    - Misc minor code improvement for other passes

commit f25fe0c80670272582db3aa791901c7fa49fc59e
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Tue Jun 28 12:47:07 2022 -0700

    Run static/dynamic models over Hexagon using Relax VM RPC (#167)

    * Move Relax VM builtins to src/runtime.

    * This fixes a bug we encountered while loading the module for Hexagon:
    since it was building the minimal runtime, it was missing the definitions
    of the Relax VM builtins.

    * Mark Hexagon module as DSO exportable.

    * Load Relax VM Executable over RPC

    * Support allocation for shape heap on device

    Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>

commit 25174be634b5e04f0468b48bd477f22b17e75f84
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Fri Jun 24 13:33:04 2022 -0700

    [CI] Enable Hexagon CI in Jenkins. (#169)

    Running all Hexagon tests in the simulator is very slow, so we only run
    the Relax-related Hexagon tests in `test_relax_integration.py`.
    This test file is empty right now and will be
    populated as we push Relax-Hexagon related changes.

commit 225aecdb5d7d33f2af048f3aef9c9a6ac758f4fd
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Jun 23 09:47:30 2022 -0700

    [VM] Add set_input interface; Fix e2e tuning script. (#166)

    * Add set_input interface.

    * Address comments.

commit 29a707cbd9be6e02dd8a3cd1961cfb53057eb51b
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Jun 16 09:07:45 2022 -0700

    WellFormed Instrument (#165)

    * add conftest for test/python/relax

    * [Wellformed Check]: allow TupleType as Function parameters

    * move WellFormedInstrument to relax.ir.instrument

    * add header

commit b4c3c4bb65b09db7c9b3ec114d6680d14f306d37
Author: Yong Wu <yongcale@gmail.com>
Date:   Sat Jun 11 23:26:17 2022 -0700

    Update after rebase

commit 3c0e3c0ee08c78b17cc1ba0429727c199737403a
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Jun 11 18:42:29 2022 -0700

    [Relay translator] Allow replacing default topi function with user-provided TIR PrimFunc. (#159)

    * Add replace_op_with_tir to translator.

    * came up with a better name

    * better doc.

commit f250f93eed886dc2c3a1cb1f8a4ab2077c57080e
Author: Yong Wu <yongcale@gmail.com>
Date:   Sat Jun 11 15:20:21 2022 -0700

    [Pass] Lambda Lifting (#99)

commit b55fd31d4e11373b30a93f88412a3d6e2d21d3c1
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue Jun 7 10:07:17 2022 +0800

    [E2E] End-to-End tuning e2e_script (#153)

    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>

commit d3f94e73ec7b9c9ac7b3675f962e9030e55fa603
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu Jun 2 08:19:18 2022 -0700

    Fix shape lowering pass bug for non i64 dims. (#152)

    Prior to this change, the VM Shape Lowering pass did not cast integer values
    to the shape heap dtype (i64), which resulted in incorrect values when read
    from the heap later. This PR adds a cast to i64 for such values.
    It also adds a well-formed check to ensure shape dimensions are of
    integer types.
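    The dtype mismatch this fix addresses can be reproduced with plain `struct` packing (a stdlib-only illustration of the failure mode, not TVM code):

```python
import struct

dims = [7, 3]
heap = bytearray(16)                 # shape heap: two i64 slots

# Buggy behavior: write each dim as a 32-bit int (no cast to i64)...
for i, d in enumerate(dims):
    struct.pack_into("<i", heap, 4 * i, d)
# ...while readers interpret the heap as i64 slots, so two packed i32
# dims get read back as one garbled i64 value:
garbled = struct.unpack_from("<q", heap, 0)[0]

# Fixed behavior: cast every dim to the heap dtype (i64) before writing.
for i, d in enumerate(dims):
    struct.pack_into("<q", heap, 8 * i, d)
fixed = struct.unpack_from("<q", heap, 0)[0]
```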

commit 9cf777f48069d598eda276be0b9aabaf301acf0f
Author: Yong Wu <yongcale@gmail.com>
Date:   Wed Jun 1 17:52:40 2022 -0700

    [Parser] Add FuncType support (#154)

    * [Parser] Add FuncType support

    * Address comments

commit f99121d506df45870cd026e052f5b3c41d4bd982
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Wed Jun 1 09:01:40 2022 -0700

    [PASS] Remove Unused Functions in IRModule (#151)

commit a718e9f9e073ca0ea1790562254c09aaa863eaa4
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Tue May 31 15:15:28 2022 -0700

    [Pass Infra] Tuning Pass API (#144)

commit a485b7bdb45f8379daa45e8c923a47fd6871cbdf
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Sun May 29 12:51:07 2022 -0400

    [REFACTOR] Move TIR op kind analysis to relax as it is relax oriented (#155)

    This also keeps TIR mostly independent from higher-level IRs.

commit abd20bdc9b87aa53e0c27e8c5c3fc195be5e8c91
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun May 29 23:31:05 2022 +0800

    add test cases for FuseTIR (#156)

commit de42ec3d5ae0f0304060460764619a5a16995a33
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Thu May 26 22:14:51 2022 +0800

    [Pass] Relax Transform FuseTIR (#150)

    * [Pass] Relax Transform FuseTIR

    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>

commit 153d0cc8f2d39b23e63fcd6feaf9755a0eaf8c28
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed May 25 15:44:59 2022 -0700

    [Mutator] Separate unnormalized-form and normal-form mutators (#148)

commit dfa42c09a3087605e805526ab7db7b49d6752ca5
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Fri May 20 16:30:18 2022 -0700

    Print/parse tir cast/max operations in Relax shape (#149)

    tir.cast and tir.max are commonly used operators in shape expressions in
    Relax. These two operators often show up when importing Relay modules
    with `Any` dims to Relax modules.

commit c7186fd44ad5865d84ac61fc2981a15c8af9be4c
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Thu May 19 18:29:12 2022 -0700

    Add support to import relay models with Any dim. (#146)

    Converts Relay `Any` dimensions to symbolic dims in Relax.

commit ef9cf6baba1c2f7215746459ad5a9193df6572c9
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Tue May 17 07:55:56 2022 -0700

    Refactor shape lowering pass and Blockbuilder. (#145)

commit 230def2284c21eaff520e58fa96a80313b6a7c8f
Author: Yong Wu <yongcale@gmail.com>
Date:   Fri May 13 14:30:05 2022 -0700

    Support Closure (#140)

commit 0e998988aabdeb8d913e2889eb5a9d72bee35ca2
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu May 12 17:13:15 2022 -0700

    [Analysis] IRModule well-formed check (#142)

commit 1bd4e685ffcc0c4b677af47ecc8609dbfacdfd9d
Author: Yong Wu <yongcale@gmail.com>
Date:   Wed May 11 09:31:13 2022 -0700

    Change after rebase

commit d0ad35b375449c7e067a1edada7502557a03dd26
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue May 10 08:44:22 2022 +0800

    FuseOps for relax (#141)

    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>

commit ae7b5b79c40498203842b6c9193e91bcc1937bea
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Wed May 4 20:52:16 2022 -0700

    Add `relax.unique` operator in Relax. (#135)

    * Add Unique operator in Relax.

    This adds the functionality to register a packed function implementation of
    any operator using the `FCallPacked` attribute. The Relax operator is then
    lowered to a call to the registered packed function during codegen.
    For example, in this change `relax.unique` is lowered to the
    `relax.run.unique` packed function, which uses torch.unique under the
    hood.

    * Add support for integer constants in Relax VM.

    This adds serialization, deserialization, and print support for
    integer constants.

commit 1ca18611ae59ab4d1667066ed9921690d2a5611c
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue May 3 09:34:55 2022 +0800

    Add ShapeType to ShapeExpr.checked_type during construction (#139)

commit 6481d533ed259a080dede704f7443c4a2221a842
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Mon May 2 16:26:08 2022 -0700

    Introduce Relax function attribute and drop name field in Relax function (#136)

commit d735ebd719d89c804691b29ee0d881c785384fc6
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Apr 30 18:45:14 2022 -0700

    [BlockBuilder] Sub function call shape deduction: constant shape case. (#137)

commit 10f8e56cbcb27beb373075e3c6e3a9728ffb5eb2
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Apr 28 16:59:38 2022 -0700

    [AST][Type] Introduce ObjectType; Infer the type of call_packed by type_args; Refactor InferType/InferShape. (#132)

commit 7e2038a8b662659dd6ba2e2a86bedbc6c3891bfa
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Apr 25 17:20:19 2022 -0700

    [AST][BlockBuilder] Normalize relax.Function; Refactor BlockBuilder to take optional input IRModule. (#133)

commit f1eca6d74365c6b0665b64c86ececce86fd76df3
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Sun Apr 24 07:09:11 2022 -0700

    [Printer][Parser] Modify Tensor annotation printing and parsing. (#128)

commit 296876eaf1246ea7948c69d2111cfea2ca51ca0c
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Fri Apr 22 08:05:13 2022 -0700

    [Pass] Python pass decorator and ExprFunctor (#126)

    * Relax ExprFunctor in Python

    * fix the register bug

    * Expr_functor in relax

    * function/dataflowblock Pass in python

    * testcases

    * reformat

    * fix Tensor annotation()

    * add return type hint

    * type hint

    * new test

    * fix typo

    * remove memo

commit 5199a206cc86cee9e43b0c8ddddf704acdc4b513
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Thu Apr 21 22:20:33 2022 +0800

    [Relax][MS] Task extraction with proper weights (#129)

    * [Relax][MS] Task extraction with proper weights (hzfengsy#32)

    * Add a unit test

    * Update the deduplication mapping / Update the unit test

    * Update test for DummyDB reusing

    * Remove unnecessary args

    * Remove unused import

commit badee2add6700f12671d3223e43875ca050f537a
Author: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
Date:   Wed Apr 20 17:09:37 2022 -0700

    [Relay Translator] Use OpStrategy for lowering (#130)

    * [Relay Translator] Use OpStrategy for lowering

    * Reflect feedback and fix lint issue

    * Consider contexts for PassContext, Target, .. for both pass application and lowering

commit 4454563d240c547fb762cec770502b1e09b195f0
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Wed Apr 13 21:00:54 2022 -0700

    Deprecate `[]` in favor `()` in Tensor annotation. (#123)

commit fab2d95697f7eecce90cb0ba12db2457caf4f2e3
Author: Yong Wu <yongcale@gmail.com>
Date:   Tue Apr 12 21:15:38 2022 -0700

    Add tune_relax to integrate with task scheduler (#127)

commit 39bab0d25f3e5bb48adf52534f2318149047f617
Author: Yong Wu <yongcale@gmail.com>
Date:   Tue Apr 12 16:22:33 2022 -0700

    Update autotir integration after rebase

commit caae30f06d237c3aebd00290802122bbfdb2ae26
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Tue Apr 12 08:23:32 2022 -0700

    [VM] Support sub function call and recursion. (#125)

    * Sub function call and recursion.

    * Address comment.

commit e7c7c15972f6aa29f30a167a794db17f74a6bdeb
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Tue Apr 12 14:18:32 2022 +0800

    [VM] Copy constant tensors to device (#124)

    * [VM] Copy constants to device (Hzfengsy#24)

    * [VM] Copy constants to device

    * Add unit tests

    * Specify shape and dtype for constant TE tensors in EmitTE

commit ef0a3e689b3896fd30a392d094beaa8d68b6de07
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Wed Apr 6 11:59:33 2022 -0700

    DataflowBlockPass (#114)

    * add DataflowBlockPass

    * update fma_rewrite

    * drop the skip function

    * update test_fma_rewrite with DataflowBlockPass

    * fix the format

    * fix name

    * rewrite test in tvm script

    * add non-dataflow Vars check

    * add fail testcases

    * module->IRModule

    * add docstring to DataflowBlockNode

    * remove unused pattern

    * Transform Pass->DataflowBlock Pass

    * rename global var to global scope var

    * remove print stmt

    * reformat tests

    * add docstring to DataflowBlockMutator

    * fix filename

    * minor fix

commit 2607f3b9112197045e773b0fc7ceb9ae57e844f8
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Apr 4 19:59:30 2022 -0700

    Remove type annotation from Var. (#121)

commit 969ffb4302f35344524ef36e74325c0d5e427b76
Author: Prakalp Srivastava <prakalp@octoml.ai>
Date:   Mon Apr 4 08:33:43 2022 -0700

    Add a new Expr to represent runtime dependent shapes. (#117)

    This can be used to represent runtime-dependent shapes such as the output of the `unique` operator. Having an explicit runtime-dependent shape expression helps distinguish two cases in the AST: (1) the shape has not been deduced (`shape_ = nullptr`), and (2) the shape is runtime dependent. Previously both cases were mapped to `shape_ = nullptr`.
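
    The distinction described above can be sketched in plain Python. These names (`RuntimeDepShape`, `Expr`) are hypothetical stand-ins for illustration, not the actual Relax classes:

    ```python
    # Illustrative sketch: model "shape not yet deduced" vs. "shape is
    # runtime dependent" with an explicit sentinel instead of overloading None.

    class RuntimeDepShape:
        """Sentinel: the shape exists only at runtime (e.g. output of `unique`)."""
        def __repr__(self):
            return "RuntimeDepShape()"

    class Expr:
        def __init__(self, shape=None):
            # shape=None               -> shape has not been deduced yet
            # shape=RuntimeDepShape()  -> shape is known to be runtime dependent
            # shape=(2, 3)             -> statically known shape
            self.shape = shape

    undeduced = Expr()
    unique_out = Expr(shape=RuntimeDepShape())

    assert undeduced.shape is None                        # still to be deduced
    assert isinstance(unique_out.shape, RuntimeDepShape)  # genuinely dynamic
    ```

    The point of the sentinel is that a pass can now tell "I have not run shape deduction yet" apart from "deduction ran and concluded the shape is dynamic".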

commit 1e2a11f6326c9b3fd3807bbe5d97e4a20ce9dadd
Author: Hongyi Jin <3231950289@qq.com>
Date:   Sun Apr 3 00:42:38 2022 +0800

    [PASS] Fold constant & Bind Params (#113)

    * fold constant and bind params

    * fix test

    * format

    * format

    * format

    * address comments

    * format

    * address comment

    * address comment

    * format

    * fix type bug

commit d441f1d0f2104b51287f9f29d9ec9f0e87f4b9d9
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Sat Apr 2 00:00:19 2022 -0400

    Temporary remove function type deduction in normalizer. (#119)

    * Temporary remove function type deduction in normalizer.

    This PR temporary removes the function type deduction in normalizer
    to unblock some of the followup passes that need to check function
    type equality.

    Functions' checked_type_ is left as nullptr for now.
    We should follow up to add function type deduction from annotations.

    * revert the normalizer skip for now

    * comment out parser assert for now

commit 159f599248e3c6faf969198d4e7cf03c4f3f6c70
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Fri Apr 1 09:18:33 2022 -0700

    [BlockBuilder] Deduce and fill shape/type for Expr in Normalize. (#116)

commit 96c8bbc53286a0ca90ddcb92346156f23ab9efe3
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Wed Mar 30 11:46:50 2022 -0700

    [CI] Enable GPU tests; Add AutoTIR cuda test. (#115)

    * Add gpu ci.

    * Update autotir gpu test.

commit 1e5c2dac7b01f73c7e3e1a8b092eb0f2b6cc5e28
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Mar 28 19:12:59 2022 -0400

    [FIX] Fix structure equal hash for MatchShape (#112)

    The pattern field of the match shape can define variables,
    as a result, we need to add DefEqual and Hash here.

    Added a regression testcase.

    Lesson: we would benefit from more testcases
    with check_save_roundtrip checks (like this one) for more relax examples.

    Additional change:
    - Redirected TVMScript printer to be able to print relax fragements useful for debugging.
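
    The "DefEqual" idea behind this fix can be sketched in plain Python: when a pattern can *define* variables, structural equality must map each variable defined on the left to the corresponding variable on the right instead of requiring identical names. This is a hypothetical illustration, not the actual TVM implementation:

    ```python
    # Structural equality with variable definitions: the first occurrence of a
    # left-hand variable defines its mapping; later occurrences must agree.

    def struct_equal(lhs, rhs, var_map=None):
        var_map = {} if var_map is None else var_map
        if lhs[0] != rhs[0]:
            return False
        if lhs[0] == "var":                    # node: ("var", name)
            if lhs[1] in var_map:
                return var_map[lhs[1]] == rhs[1]
            var_map[lhs[1]] = rhs[1]           # defining occurrence
            return True
        kids_l, kids_r = lhs[1], rhs[1]        # node: ("match_shape", [children])
        return len(kids_l) == len(kids_r) and all(
            struct_equal(a, b, var_map) for a, b in zip(kids_l, kids_r))

    a = ("match_shape", [("var", "n"), ("var", "m"), ("var", "n")])
    b = ("match_shape", [("var", "x"), ("var", "y"), ("var", "x")])
    c = ("match_shape", [("var", "x"), ("var", "y"), ("var", "y")])
    assert struct_equal(a, b)      # alpha-equivalent: n->x, m->y
    assert not struct_equal(a, c)  # repeated-variable pattern differs
    ```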

commit 8e466be1d1fa65b9df119e0563ef58c38e8562f2
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Tue Mar 29 01:30:07 2022 +0800

    introduce blockbuilder call_te (#110)

commit 6ff1614ac3c9e63ea5b615a072a1d26a197b58f9
Author: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Date:   Sun Mar 27 00:02:53 2022 +0800

    [FIX] fix structural_equal_hash (#107)

    * fix structural_equal_hash

    (cherry picked from commit e7e962634999739a32129378f61cc95f58335447)

    * address comment & pass the ci

commit 31ed53c92192c74a3f55009e718b8ae0527ce078
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Fri Mar 25 10:49:00 2022 -0700

    [Bugfix] Fix call_tir parsing bug (#109)

    * Fix call_tir parsing bug.

    * update.

commit 3c7ff5a272d4b004b9b86b79e0f10c33635cea05
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Thu Mar 24 19:50:27 2022 -0700

    [VM] Fix hardcoded device type in memory lowering (#106)

    * Add is_device field to attr.

    * Update.

    * Address comment.

    * update.

    * Update.

commit 6bcdcf8d02809dbbafbbd9515ea7ada17bb00077
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Thu Mar 24 23:04:11 2022 +0800

    [VM] Initialize VM through packed function (#101)

commit cfc779e732933eb43cb0bca6448c51fac51dc39f
Author: Yong Wu <yongcale@gmail.com>
Date:   Tue Mar 22 19:44:37 2022 -0700

    Fix after rebase

commit c368324831d378033d9b0f6621f3ee3b366624e6
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Tue Mar 22 18:51:40 2022 -0700

    Improve printer for DynTensorType and ShapeExpr (#97)

    * improve Printer for DynTensorType & ShapeExpr

    * add testcases

commit a861f2eeadc3ded5a98aa2947a6b17f077e29dc2
Author: Ruihang Lai <lairuihangdongdong@qq.com>
Date:   Tue Mar 22 23:16:33 2022 +0800

    [VM][Refactor] Move VM files to TVM runtime directory (#98)

commit d96806093e9ff50aaf4d46a89d1003f87385bf7e
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Mon Mar 21 12:03:59 2022 -0400

    [VM] Refactor and improve vm. (#96)

    * [VM] Refactor and improve vm.

    - Have a separate function for RunInstCall.
    - Cache func_index lookup by table to avoid repetitive lookup by str.
    - Move PackedFunc call arg stack to Frame to increase locality and avoid re-allocation in repetitive calls.
    - Make the frame stack hold unique_ptr to avoid frame re-allocation and copy during frame.resize.
    - Pass…
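
    The func_index caching mentioned above can be sketched in plain Python: resolve a function name to a table index once, then dispatch by integer index on repeated calls instead of repeating the string lookup. Names here (`MiniVM`, `lookup`, `call`) are hypothetical, not the actual relax VM code:

    ```python
    # Cache name -> index resolution so the hot call path only indexes a table.

    class MiniVM:
        def __init__(self, funcs):
            self.func_table = list(funcs.values())           # dense function table
            self.func_index = {name: i for i, name in enumerate(funcs)}

        def lookup(self, name):
            return self.func_index[name]                     # string lookup, done once

        def call(self, index, *args):
            return self.func_table[index](*args)             # no string lookup per call

    vm = MiniVM({"add": lambda a, b: a + b, "mul": lambda a, b: a * b})
    idx = vm.lookup("mul")             # cache this index up front
    assert vm.call(idx, 3, 4) == 12    # repeated calls reuse the cached index
    ```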
jinhongyii pushed a commit to jinhongyii/relax that referenced this pull request Jan 24, 2023
This is the PR following tlc-pack#55 after the source branch moved to a personal repo.

This PR is based on tlc-pack#98.

This PR adds the new automatic differentiation API:
- `Gradient(func: GlobalVar, require_grads: Optional[Union[Var,
List[Var]]] = None) -> tvm.ir.transform.Pass`
- transforms the given function in the IRModule, and adds a new function
that calculates the gradient with regard to the function's output

Now Gradient only supports differentiating a function in the IRModule
with one dataflow block with respect to the only return value of the
function, which needs to be a scalar.

This PR writes two files for unit test:
- `tests/python/relax/test_transform_gradient.py` only contains
`assert_structural_equal` assertions.
- `tests/python/relax/test_transform_gradient_numeric.py` contains
numeric checks, including manually derived gradients and the numerical
differentiation method `check_numerical_grads`.

Checkpoints:
- [x] Refactor to use CopyWithNewParams and ExprFunctor
- [x] Check int64/int32 tensors should not be differentiated (now only
check in params)
- [x] Rebase & migrate to StructInfo
- [x] Refactor about Tuple
- [x] Refactor about NestedMsg
- [x] Support ops taking in tuple or returning tuple
- [x] Eliminating collapse_sum_to (done in tlc-pack#98)

Future:
- (Not in this PR) Handle undefined gradient in add and return value
	- Now we handle them as zeros

Co-authored-by: SiriusNEO <1713833595@qq.com>
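
The numeric checks mentioned above compare derived gradients against finite differences. A minimal sketch of what a `check_numerical_grads`-style utility does (assumed behavior; the real helper lives in TVM's testing module):

```python
# Central-difference estimate of the gradient of f at xs, compared against
# a manually derived analytic gradient.

def numerical_grad(f, xs, eps=1e-6):
    grads = []
    for i in range(len(xs)):
        hi = list(xs); hi[i] += eps
        lo = list(xs); lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

f = lambda v: v[0] * v[0] + 3 * v[1]     # f(x, y) = x^2 + 3y
analytic = lambda v: [2 * v[0], 3.0]     # manually derived gradient

x = [1.5, -2.0]
num = numerical_grad(f, x)
assert all(abs(a - n) < 1e-4 for a, n in zip(analytic(x), num))
```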
junrushao pushed a commit to junrushao/relax that referenced this pull request Jan 25, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Jan 26, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Jan 29, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 31, 2023
This PR migrates mlc-ai/relax#46 to new struct
info infra, as part of our AD migration.

Because we need to do numerical testing for gradients, this PR depends on
the operator legalizer mlc-ai/relax#96. Also
because the original version of legalizer did not handle the negative
indexing case of `relax.mean`, this PR fixes it.

To lower `collapse_sum_to`, `collapse_sum_like` properly, this PR
migrates a previous patch mlc-ai/relax#43 which
introduces `collapse_sum` in topi. Now we can remove the skip marker in
the legalizer test for `collapse_sum_to` and `collapse_sum_like`.

The gradients of `cross_entropy` and `softmax_cross_entropy` are
removed. The former will be added back and adjusted to the new
`cross_entropy` introduced in mlc-ai/relax#96.

Further plan in this PR:
- [x] Add gradients for `log_softmax` and `nll_loss` once
mlc-ai/relax#94 is merged.
- [x] Gradients for some tuple related operators such as `split` and
`concat`. It can help us to test the correctness of AD when there are
Tuple-I/O operators.
- (Not in this PR) "Undefined Gradient" representation. As we know, the
gradients of some operators w.r.t. specified inputs are undefined or
meaningless, such as the partial gradient of `indices` in `take(x,
indices)`. Relay directly uses `zeros_like` in this case as it won't
affect gradient propagation. Another choice is to introduce a dummy Expr
named `UndefinedGradient` to represent it. How do we handle this case in
relax?
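
The role of `collapse_sum` in AD can be sketched in plain Python: when a forward op broadcasts an input, the backward pass must sum the incoming gradient over the broadcast axes to recover the input's shape. This is an illustrative 2D-only sketch, not the topi implementation:

```python
# Collapse a (rows, cols) upstream gradient back to the shape of a
# broadcast input by summing over the broadcast axes.

def collapse_sum_2d(grad, target_shape):
    rows, cols = len(grad), len(grad[0])
    if target_shape == (rows, cols):
        return grad                         # nothing was broadcast
    if target_shape == (cols,):             # a (cols,) input broadcast over rows
        return [sum(grad[r][c] for r in range(rows)) for c in range(cols)]
    if target_shape == ():                  # a scalar input broadcast everywhere
        return sum(sum(row) for row in grad)
    raise ValueError("unsupported target shape in this sketch")

g = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]                       # upstream gradient, shape (2, 3)
assert collapse_sum_2d(g, (3,)) == [5.0, 7.0, 9.0]  # grad for a (3,) input
assert collapse_sum_2d(g, ()) == 21.0               # grad for a scalar input
```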
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 31, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 31, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 31, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Feb 5, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Feb 6, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Feb 8, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Feb 8, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Feb 12, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Feb 12, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Feb 12, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Feb 12, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Feb 13, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Feb 13, 2023