
Add parity test for LayerNormalization #8622

Merged
tianleiwu merged 4 commits into master from tlwu/test_parity_layernorm on Aug 5, 2021

Conversation

@tianleiwu
Contributor

@tianleiwu tianleiwu commented Aug 5, 2021

Description:
Add a test to verify the precision of LayerNormalization.

Example output (results may vary since the inputs are random):

| Device | I/O Precision | LayerNorm Input Precision | MaxDiff  | Comment                                           |
|--------|---------------|---------------------------|----------|---------------------------------------------------|
| CPU    | FP32          | FP32                      | 3.33e-06 |                                                   |
| CUDA   | FP32          | FP32                      | 9.54e-07 |                                                   |
| CUDA   | FP16          | FP32                      | 9.8e-04  | Cast(to=FP32) -> LayerNorm(FP32) -> Cast(to=FP16) |
| CUDA   | FP16          | FP16                      | 3.9e-03  |                                                   |
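The rows above compare a full-precision baseline against reduced-precision variants. A minimal numpy sketch of the idea (not the PR's actual test code; the shapes, seed, and tolerance thresholds here are illustrative assumptions):

```python
import numpy as np

def layer_norm(x, gamma, beta, epsilon=1e-5):
    # Reference LayerNormalization over the last axis.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + epsilon) * gamma + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 768)).astype(np.float32)
gamma = rng.standard_normal(768).astype(np.float32)
beta = rng.standard_normal(768).astype(np.float32)

# Baseline: everything in FP32.
ref = layer_norm(x, gamma, beta)

# FP16 I/O with FP32 compute, mimicking
# Cast(to=FP32) -> LayerNorm(FP32) -> Cast(to=FP16).
mixed = layer_norm(x.astype(np.float16).astype(np.float32),
                   gamma, beta).astype(np.float16)

# FP16 inputs throughout.
fp16 = layer_norm(x.astype(np.float16),
                  gamma.astype(np.float16),
                  beta.astype(np.float16))

print("max diff (FP16 I/O, FP32 compute):",
      np.abs(ref - mixed.astype(np.float32)).max())
print("max diff (FP16 throughout):",
      np.abs(ref - fp16.astype(np.float32)).max())
```

As in the table, keeping the LayerNorm computation in FP32 and only casting the I/O typically yields a smaller max difference than running in FP16 throughout.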

Motivation and Context

  • Why is this change required? What problem does it solve?
  • If it fixes an open issue, please link to the issue here.

@tianleiwu tianleiwu requested review from viboga and wangyems August 5, 2021 00:23
@tianleiwu tianleiwu requested a review from a team as a code owner August 5, 2021 00:23
@tianleiwu tianleiwu merged commit 24b14c6 into master Aug 5, 2021
@tianleiwu tianleiwu deleted the tlwu/test_parity_layernorm branch August 5, 2021 17:11