This repository has been archived by the owner on May 22, 2023. It is now read-only.

Relay->Relax translator (ResNet example) #75

Merged
merged 9 commits into relax from relax-resnet on Feb 14, 2022

Conversation

YuchenJin (Collaborator):

This PR implements a Relay-to-Relax translator, which allows us to import Relay workloads into Relax for benchmarking and development purposes:

  • Support converting the most commonly used operators to Relax through EmitTE in relay_translator.py. @yongwww will add more ops and implement an LSTM demo in the future.
  • A ResNet demo that translates the Relay ResNet model is in apps/relax_examples/resnet.py; a minimal usage sketch follows below.

cc: @yongwww @hypercubestart @ZihengJiang
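
For reference, a minimal sketch of what the demo amounts to end to end, assuming the relay_translator.from_relay and R.parser.astext APIs shown later in this thread and TVM's stock Relay ResNet workload (the actual apps/relax_examples/resnet.py may differ in details):

import tvm
from tvm import relax as R
from tvm.relay import testing as relay_testing
from tvm.relax.testing import relay_translator

# Build the Relay ResNet-18 workload, then translate its "main" function to Relax.
relay_mod, params = relay_testing.resnet.get_workload(num_layers=18, batch_size=1)
relax_mod = relay_translator.from_relay(relay_mod["main"])

# Print the translated Relax IRModule.
print(R.parser.astext(relax_mod))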

class ConstantNode : public ExprNode {
 public:
  /*! \brief The data of the tensor */
  runtime::NDArray data;
Contributor:

Can we use relay::ConstantNode?

YuchenJin (Collaborator, Author):

The reason I added relax::ConstantNode was that when we encounter a ConstantNode in Relay during translation, we need to fill its shape_ field so that it can be converted to a TE tensor. The constructor of relax::ConstantNode fills the shape_ field here: https://github.com/tlc-pack/relax/blob/relax-resnet/src/relax/ir/expr.cc#L61.

Contributor:

Can we change the original constructor to make the shape an optional field?

YuchenJin (Collaborator, Author):

I now reuse relay::ConstantNode and fill its shape_ and checked_type_ during construction.
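
(For illustration, a hypothetical Python-level sketch of what "fill its shape_ and checked_type_" amounts to; the actual construction in this PR happens on the C++ side and may differ.)

import numpy as np
from tvm import relay

const = relay.const(np.zeros((3, 4), dtype="float32"))
shape = [int(dim) for dim in const.data.shape]  # -> [3, 4]
dtype = const.data.dtype                        # -> "float32"
# With the shape and dtype known up front, the constant can be turned into a
# TE tensor without a separate shape-inference step.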

if attrs is not None:
    shape = attrs["shape"]
    dtype = attrs["dtype"]
return nn.emit_te(topi.full, shape, dtype, 0.0)
Contributor:

I still feel it would be better to make these op conversions more automatic. We should be able to get the op's corresponding TE function and attributes from the Op object itself.

YuchenJin (Collaborator, Author):

Thanks for the suggestion! Yes, I totally agree it'd be great to make the Op conversions more generic and automatic.

I couldn't find a good API to get an op's TE function in the TVM codebase (op.get_attr("FTVMCompute") returns a PackedFunc); do you happen to have a pointer? Another question: is the mapping between an Op and a TE function one-to-one? For example, nn.conv2d maps to multiple TE functions/strategies depending on the layout.

YuchenJin (Collaborator, Author):

Another inconvenient and cumbersome part of implementing the converter was that some topi functions take positional arguments (for example, one_hot), but some of those positional arguments are stored in the attrs (which is a dict). So I had to unpack them from the attrs and pass them explicitly, like the shape and dtype here. I think we can write a signature inspector to automate this part in the future if we really want it.
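
As a rough illustration of that signature-inspector idea (the helper below is hypothetical; it assumes attrs behaves like a dict, as in the snippet above, and that tensor inputs come first in the topi signature):

import inspect

def call_topi_with_attrs(te_func, te_inputs, attrs):
    # Parameters not covered by the tensor inputs (e.g. depth/axis/dtype for
    # topi.one_hot) are looked up in attrs by name and passed as keywords.
    params = list(inspect.signature(te_func).parameters)
    kwargs = {}
    for name in params[len(te_inputs):]:
        if attrs is not None and name in attrs.keys():
            kwargs[name] = attrs[name]
    return te_func(*te_inputs, **kwargs)

Something like call_topi_with_attrs(topi.one_hot, [indices, on_value, off_value], attrs) would then pick up depth, axis, and dtype from the attrs by name instead of unpacking each one by hand.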

Contributor:

I would suggest taking a look at how the current lowering flow works in Relay, since it also needs to do this job: given the op and the target, it looks up the schedule and does the remaining lowering. I can also do some checking this week.

Contributor:

But if the PR is needed urgently, we can merge it now.

YuchenJin (Collaborator, Author):

No worries, let me try something and we can (probably) merge it tomorrow. :)

yongwww (Collaborator) commented on Feb 11, 2022:

An automatic Relay->TOPI mapping looks helpful for many cases, but there are still lots of models that cannot be supported even with automatic op mapping. For example, almost all object detection/segmentation models, such as Mask R-CNN, are composed with Relay ADT constructs like TensorArray, which are not straightforward to convert to Relax. We will probably need converters from the original DL frameworks to Relax in the end (more discussion is needed).

Contributor:

Yes, we definitely need converters built into Relax eventually. Otherwise, the expressiveness cannot go beyond Relay's scope. The translator here can serve as a temporary solution while we don't have the optimization passes yet.

YuchenJin (Collaborator, Author):

I updated the translator to leverage FTVMCompute for quite a lot of op translations, which saves a lot of code 😎. Please take a look. Thanks! :)
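
A hedged sketch of what "leverage FTVMCompute" looks like (FTVMCompute is Relay's registered compute function with signature (attrs, inputs, out_type) -> tensors; exactly how it is wired into relay_translator.py may differ, and ops registered only through op strategies, like nn.conv2d, still need special handling):

from tvm import relay

def te_compute_for_call(call: relay.Call, te_inputs):
    # Look up the op's registered compute function and run it on TE tensors.
    # call.checked_type is only available after type inference on the module.
    fcompute = call.op.get_attr("FTVMCompute")
    if fcompute is None:
        raise NotImplementedError(f"no FTVMCompute registered for {call.op.name}")
    return fcompute(call.attrs, te_inputs, call.checked_type)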

yongwww (Collaborator) left a comment:

overall LGTM!

cc @ZihengJiang @hypercubestart could you please take another look when you have some cycles?

hypercubestart (Collaborator) left a comment:

LGTM! Thanks for the hard work!

relax_mod = relay_translator.from_relay(relay_mod["main"])

# print the translated ResNet IRModule
print(R.parser.astext(relax_mod))
Collaborator:

We might not need this print? And how about moving this file to tests/python/relax/?

YuchenJin (Collaborator, Author):

I'd prefer to put it in apps/relax_examples because it's a demo that translates the Relay ResNet workload to Relax. And I put the translator in python/tvm/relax/testing/ because the Relay->Relax translator is a temporary workaround that lets us construct real workloads quickly. We will need PyTorch/TF/ONNX importers once we have high-level ops in Relax.

Outdated review threads (resolved):
src/printer/relax_script_printer.cc
python/tvm/relax/testing/topi.py
include/tvm/relax/expr.h
include/tvm/relax/topi.h
python/tvm/relax/testing/relay_translator.py
Robslhc (Contributor) commented on Feb 11, 2022:

Hi @YuchenJin,
I have a question about this translator. It seems that all Relay ops are directly transformed into PrimFuncs. How should graph optimizations be performed, then? For example, we can see the TIR function of batch_norm, which is normally simplified by the SimplifyInference pass in Relay. Also, we cannot see any fused ops in the translated IR. I guess it is not easy to perform such graph optimizations on this result, because we no longer have the concept of an "op". What's your idea about this?

YuchenJin (Collaborator, Author):


Hi @Robslhc, glad to see you here, it's a great question! :)

In last week's Relax open dev meeting, @junrushao1994 had a good idea: we can try translating the fused Relay program (the program after the FuseOps pass) to Relax directly, so we don't have to write a Relax fusion pass for now to achieve performance comparable to Relay. We can enhance this translator later to add this feature.
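
A hedged sketch of that idea, using standard Relay passes (how the pass pipeline would actually be wired into the translator is still open):

import tvm
from tvm import relay
from tvm.relax.testing import relay_translator

# relay_mod: a Relay IRModule to translate, e.g. the ResNet workload from the demo.
with tvm.transform.PassContext(opt_level=3):
    seq = tvm.transform.Sequential(
        [
            relay.transform.SimplifyInference(),  # decomposes batch_norm, dropout, ...
            relay.transform.FoldConstant(),
            relay.transform.FuseOps(fuse_opt_level=2),
        ]
    )
    fused_mod = seq(relay_mod)

# Translating the already-fused module would carry the fused groups into Relax
# without needing a Relax-side fusion pass yet.
relax_mod = relay_translator.from_relay(fused_mod["main"])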

Outdated review threads (resolved):
include/tvm/relax/topi.h
src/relax/topi/topi.cc
yongwww merged commit 27c1ea2 into relax on Feb 14, 2022
yongwww (Collaborator) commented on Feb 14, 2022:

Thanks @YuchenJin for the hard work!!! Thanks @ZihengJiang @tqchen @hypercubestart @Robslhc @junrushao1994 for the review/discussion!

YuchenJin deleted the relax-resnet branch on February 14, 2022 at 17:47
ZihengJiang (Contributor) left a comment:

Great work. Just a minor question.

@@ -215,6 +216,12 @@ Doc RelaxScriptPrinter::VisitNode_(const relay::TupleGetItemNode* op) {
   return doc;
 }

 Doc RelaxScriptPrinter::VisitNode_(const relay::ConstantNode* op) {
   Doc doc;
   doc << "meta[relax.Constant][" << constant_counter_++ << "]";
Contributor:

Will this emit a different counter number for the same constant?

Collaborator:

Good catch! I think it might emit a different counter number. Actually this block is more like a TODO marker; it will be implemented and tested in #76.

Contributor:

Sounds good :)

YuchenJin (Collaborator, Author):

Thanks Ziheng for the good catch! You are indeed right. Yong will add the "real" metadata section support in his PR. :)

YuchenJin added a commit that referenced this pull request Mar 2, 2022
* Relay translator; build static bert forward/backward.

* rebase

* Add ops.

* resnet demo

* cleanup code.

* Rebase.

* Address comments.

* leverage FTVMCompute for most op translation; reuse relay.Constant.

* lint.
yongwww pushed a commit to yongwww/relax that referenced this pull request Mar 14, 2022
YuchenJin added a commit that referenced this pull request Jun 1, 2022
jinhongyii pushed a commit to jinhongyii/relax that referenced this pull request Jun 5, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Jun 12, 2022
yongwww pushed a commit to yongwww/relax that referenced this pull request Jul 15, 2022
Hzfengsy pushed a commit to Hzfengsy/relax that referenced this pull request Aug 5, 2022
yongwww pushed a commit that referenced this pull request Aug 14, 2022
junrushao pushed a commit that referenced this pull request Oct 14, 2022
YuchenJin added a commit that referenced this pull request Nov 18, 2022
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Jan 1, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 9, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Jan 13, 2023
YuchenJin added a commit that referenced this pull request Jan 13, 2023
YuchenJin added a commit that referenced this pull request Jan 14, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Jan 16, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Jan 25, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Jan 26, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Jan 29, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 31, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Jan 31, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Feb 5, 2023
junrushao pushed a commit to junrushao/relax that referenced this pull request Feb 6, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Feb 8, 2023
junrushao pushed a commit that referenced this pull request Feb 8, 2023
junrushao pushed a commit that referenced this pull request Feb 8, 2023
junrushao pushed a commit that referenced this pull request Feb 8, 2023
junrushao pushed a commit that referenced this pull request Feb 8, 2023
junrushao pushed a commit that referenced this pull request Feb 9, 2023
junrushao pushed a commit that referenced this pull request Feb 9, 2023
MasterJH5574 pushed a commit to MasterJH5574/tlc-relax that referenced this pull request Feb 12, 2023
vinx13 pushed a commit to vinx13/relax that referenced this pull request Feb 13, 2023