
[relay][pass] add layerNormal infer layout #11784

Merged: masahi merged 1 commit into apache:main from chengven027:layerNormal.add.infer on Jun 21, 2022

Conversation

@chengven027 (Contributor)

The structure before adding infer layout support for nn.layer_norm (ConvertLayout has to insert a layout_transform back to NHWC in front of layer_norm):

fn (%x: Tensor[(1, 56, 56, 64), float32] /* ty=Tensor[(1, 56, 56, 64), float32] */, %weight: Tensor[(3, 3, 64, 64), float32] /* ty=Tensor[(3, 3, 64, 64), float32] */, %gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */) -> Tensor[(1, 56, 56, 64), float32] {
  %0 = layout_transform(%x, src_layout="NHWC", dst_layout="NCHW") /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %1 = layout_transform(%weight, src_layout="HWIO", dst_layout="OIHW") /* ty=Tensor[(64, 64, 3, 3), float32] */;
  %2 = nn.conv2d(%0, %1, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %3 = layout_transform(%2, src_layout="NCHW", dst_layout="NHWC") /* ty=Tensor[(1, 56, 56, 64), float32] */;
  nn.layer_norm(%3, %gamma, %beta, axis=3) /* ty=Tensor[(1, 56, 56, 64), float32] */
} /* ty=fn (Tensor[(1, 56, 56, 64), float32], Tensor[(3, 3, 64, 64), float32], Tensor[(64), float32], Tensor[(64), float32]) -> Tensor[(1, 56, 56, 64), float32] */

The structure after adding it (layer_norm now stays in NCHW with axis=1, and the single layout_transform back to NHWC moves to the function output):

fn (%x: Tensor[(1, 56, 56, 64), float32] /* ty=Tensor[(1, 56, 56, 64), float32] */, %weight: Tensor[(3, 3, 64, 64), float32] /* ty=Tensor[(3, 3, 64, 64), float32] */, %gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */) -> Tensor[(1, 56, 56, 64), float32] {
  %0 = layout_transform(%x, src_layout="NHWC", dst_layout="NCHW") /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %1 = layout_transform(%weight, src_layout="HWIO", dst_layout="OIHW") /* ty=Tensor[(64, 64, 3, 3), float32] */;
  %2 = nn.conv2d(%0, %1, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %3 = nn.layer_norm(%2, %gamma, %beta, axis=1) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  layout_transform(%3, src_layout="NCHW", dst_layout="NHWC") /* ty=Tensor[(1, 56, 56, 64), float32] */
} /* ty=fn (Tensor[(1, 56, 56, 64), float32], Tensor[(3, 3, 64, 64), float32], Tensor[(64), float32], Tensor[(64), float32]) -> Tensor[(1, 56, 56, 64), float32] */
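
For reference, a minimal sketch of how a before/after like this can be reproduced with TVM's Python API. The shapes mirror the IR dumps above; this is an illustration of the ConvertLayout workflow, not necessarily the exact test added by this PR:

import tvm
from tvm import relay

# Shapes mirror the IR dumps above: NHWC input, HWIO kernel.
x = relay.var("x", shape=(1, 56, 56, 64))
weight = relay.var("weight", shape=(3, 3, 64, 64))
gamma = relay.var("gamma", shape=(64,))
beta = relay.var("beta", shape=(64,))

y = relay.nn.conv2d(x, weight, padding=(1, 1), channels=64, kernel_size=(3, 3),
                    data_layout="NHWC", kernel_layout="HWIO")
y = relay.nn.layer_norm(y, gamma, beta, axis=3)  # normalize the channel axis
mod = tvm.IRModule.from_expr(relay.Function([x, weight, gamma, beta], y))

# Convert conv2d to NCHW/OIHW. With infer layout registered for layer_norm,
# the NCHW layout propagates through it (axis 3 -> 1) instead of forcing a
# layout_transform back to NHWC in front of it.
seq = tvm.transform.Sequential([
    relay.transform.ConvertLayout({"nn.conv2d": ["NCHW", "OIHW"]}),
    relay.transform.InferType(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)

Printing the module after ConvertLayout should show nn.layer_norm operating on the NCHW tensor with axis=1, as in the second IR dump above.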

masahi merged commit 9ff2a5e into apache:main on Jun 21, 2022
chengven027 deleted the layerNormal.add.infer branch on Jun 24, 2022
blackkker pushed a commit to blackkker/tvm that referenced this pull request on Jul 7, 2022