
[Unity][UX] Symbolic Variables Used in Multiple Functions#14606

Merged
MasterJH5574 merged 5 commits into apache:unity from SiriusNEO:unity-dev/2023-04-10-tir-var-in-multi-func
Apr 13, 2023

Conversation

@SiriusNEO (Contributor) commented Apr 12, 2023

Prior to this PR, there was no constraint preventing a user from defining multiple functions that use the same symbolic TIR var. For example, a user may write the following script:

# Imports assumed for this snippet (not shown in the original PR description):
import tvm
from tvm import relax, tir
from tvm.script import ir as I, relax as R
from tvm.relax.transform import LegalizeOps

batch_size = tir.Var("batch_size", "int64")

@I.ir_module
class Test:
    @R.function
    def main(
        x: R.Tensor((batch_size, 10), "float32"),
    ):
        with R.dataflow():
            lv = R.sum(x, axis=1)
            lv1 = R.mean(x, axis=1)
            out = R.add(lv, lv1)
            R.output(out)
        return out

    @R.function
    def main1(
        x: R.Tensor((batch_size, 10), "float32"),
        y: R.Tensor((batch_size, 10), "float32"),
    ):
        with R.dataflow():
            out = R.subtract(x, y)
            R.output(out)
        return out

lowered_mod = LegalizeOps()(Test)

ex = relax.build(lowered_mod, "llvm")
vm = relax.VirtualMachine(ex, tvm.cpu(0))

But this script cannot be built because VMShapeLower does not handle this case well.
Since this is illegal behaviour, we need to prevent users from doing it. Specifically, this PR contains two parts of work:

  • Make the well-formedness checker reject this case.
  • Make the CopyWithNewVars utility copy the symbolic vars in the struct info inside the function.
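The first part can be sketched in plain Python. This is illustrative only; the function name and the per-function mapping of symbolic variable names are assumptions for the sketch, not TVM's actual well-formedness checker:

```python
# Illustrative sketch, not TVM's real well-formedness checker: given the
# symbolic shape variables each function uses, report any variable that
# appears in more than one function (which this PR makes ill-formed).

def find_shared_symbolic_vars(func_to_vars):
    seen = {}       # var name -> first function that used it
    shared = set()  # vars used by two or more functions
    for fn, var_names in func_to_vars.items():
        for v in var_names:
            if v in seen and seen[v] != fn:
                shared.add(v)
            seen.setdefault(v, fn)
    return shared

# The module from the PR description: both functions use "batch_size".
module_vars = {
    "main": {"batch_size"},
    "main1": {"batch_size"},
}
print(find_shared_symbolic_vars(module_vars))  # {'batch_size'}
```

A checker along these lines would flag the Test module above, since batch_size appears in the signatures of both main and main1.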

@tvm-bot (Collaborator) commented Apr 12, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@SiriusNEO (Contributor, Author)

For the CopyWithNewVars part, cc @Ubospica @yelite

@yongwww (Member) commented Apr 12, 2023

Thanks @SiriusNEO for the effort. I am curious about the error in VMShapeLower for this case; could you elaborate a bit?

@SiriusNEO (Contributor, Author)

@yongwww Sure! For the example above, if you run it you will get an error at this line:

ex = relax.build(lowered_mod, "llvm")

And the error message is:

        at /home/sirius/catalyst/mlc-ai/src/relax/backend/vm/vm_shape_lower.cc:366
  File "/home/sirius/catalyst/mlc-ai/src/relax/backend/vm/vm_shape_lower.cc", line 366
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (slot->value_computed) is false: PrimExpr T.int64(4) * batch_size * T.int64(10) has not been computed

But if you remove either main or main1 from the module Test, it builds fine.
In another experiment, I cleared slot_vec_ and slot_map_ at the beginning of Function Rewrite(GlobalVar gvar, Function func) in VMShapeLower, and then this example also works.
So after digging into it a bit, I found that the problem lies in the same TIR var being shared across multiple functions (I can't figure out the complete details, since I'm not very familiar with the logic of VMShapeLower). After communicating with TQ, this behaviour (using the same TIR var in multiple functions) is considered illegal, so I sent this PR.

@yongwww (Member) commented Apr 13, 2023

@SiriusNEO Gotcha, thanks!

@Hzfengsy (Member)

cc @LeshengJin
