
[microNPU] Add NHWC -> NHCWB16 layout transformation pass #9561

Merged 5 commits into apache:main on Dec 3, 2021

Conversation

@lhutton1 (Contributor) commented Nov 23, 2021

Adds a layout optimization pass that rewrites the ifm/ofm layout of an operation to NHCWB16 where possible, i.e. when the producer or consumer of the tensor is also an NPU operator.

Note: this PR is dependent on #9560.

cc @ekalda @manupa-arm @NicolaLancellotti @dchauhan-arm @mbaret
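For readers unfamiliar with the idea, the following is a minimal standalone sketch (not the actual TVM/microNPU pass) of the core decision: a tensor is kept in NHCWB16 only when both the operation producing it and the operations consuming it are NPU operators, so graph boundaries with non-NPU code stay in NHWC. All names (`Op`, `optimize_layouts`, `NPU_OPS`) are hypothetical and chosen for illustration.

```python
# Illustrative sketch only -- not the TVM implementation.
# Operators are modelled as simple records; the pass flips the layout of
# tensors connecting two NPU operators from NHWC to NHCWB16.

from dataclasses import dataclass, field
from typing import List

NPU_OPS = {"ethosu_conv2d", "ethosu_depthwise_conv2d", "ethosu_pooling"}

@dataclass
class Op:
    name: str                        # operator kind, e.g. "ethosu_conv2d"
    ifm_layout: str = "NHWC"         # layout of the input feature map
    ofm_layout: str = "NHWC"         # layout of the output feature map
    consumers: List["Op"] = field(default_factory=list)

def optimize_layouts(ops: List[Op]) -> None:
    """Switch ifm/ofm layouts to NHCWB16 where the producer and all
    consumers are NPU operators; graph boundaries remain NHWC."""
    for op in ops:
        if op.name not in NPU_OPS:
            continue
        if op.consumers and all(c.name in NPU_OPS for c in op.consumers):
            # Both sides understand the NPU's internal brick format,
            # so the intermediate tensor can be kept in NHCWB16.
            op.ofm_layout = "NHCWB16"
            for consumer in op.consumers:
                consumer.ifm_layout = "NHCWB16"

# Example: ethosu_conv2d -> ethosu_depthwise_conv2d -> (non-NPU) clip
conv = Op("ethosu_conv2d")
depthwise = Op("ethosu_depthwise_conv2d")
clip = Op("clip")
conv.consumers = [depthwise]
depthwise.consumers = [clip]

optimize_layouts([conv, depthwise, clip])
assert conv.ofm_layout == "NHCWB16" and depthwise.ifm_layout == "NHCWB16"
assert depthwise.ofm_layout == "NHWC"   # its consumer is not an NPU op
```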

@ekalda (Contributor) left a comment

Looks really good! :) Very nice test coverage, clear and understandable tests, and well done for checking all the various graph types! I noticed that none of the tests has a depthwise_conv2d in it, so it may be worth inserting a depthwise somewhere into the tested graphs as well to make sure there are no surprises on the depthwise front?

@lhutton1 (Contributor, Author) commented Dec 3, 2021

Friendly ping for comment/approval :)

@manupak (Contributor) left a comment

LGTM!

@manupak (Contributor) commented Dec 3, 2021

@ekalda ?

@ekalda (Contributor) left a comment
Yep LGTM! Thanks @lhutton1! :)

@manupak merged commit 6c8ed60 into apache:main on Dec 3, 2021
@manupak (Contributor) commented Dec 3, 2021

Thanks @lhutton1 @ekalda! This is merged now!

ylc pushed a commit to ylc/tvm that referenced this pull request Jan 7, 2022
yangulei pushed a commit to yangulei/tvm that referenced this pull request Jan 11, 2022
yangulei pushed a commit to yangulei/tvm that referenced this pull request Jan 12, 2022
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022
@lhutton1 deleted the layout-optimize-pass-initial branch March 17, 2022
qsqqsqqsq-intellif pushed a commit to qsqqsqqsq-intellif/tvm that referenced this pull request Apr 29, 2022