Update bfloat16 support for Intel CPUs #2209
Conversation
Preview: Preview and run these notebook edits with Google Colab. Rendered notebook diffs are available on ReviewNB.com.

Format and style: Use the TensorFlow docs notebook tools to format for consistent source diffs and lint for style:

$ python3 -m pip install -U --user git+https://github.com/tensorflow/docs

If commits are added to the pull request, synchronize your local branch: git pull origin gaurides/mixedprecision_doc
@penpornk: Can you please review this PR to update the documentation?
@MarkDaoust Could you please help review this mixed precision doc update? :)
Thanks for the ping. I think it's okay, let's get this merged, it's been waiting long enough.
Include suggested changes. Co-authored-by: Mark Daoust <markdaoust@google.com>
@MarkDaoust - I included your suggestion. Can you please re-approve? Thanks.
Thanks.
…n_doc PiperOrigin-RevId: 548834905
This PR updates the mixed_precision documentation to note that Intel's Sapphire Rapids CPUs support bfloat16 optimizations via AMX instructions, so users can expect a performance boost on these CPUs.
It doesn't affect any other part of the document.
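For background on what the updated doc describes: bfloat16 keeps float32's 8 exponent bits (so the same dynamic range) but truncates the mantissa to 7 bits. A minimal pure-Python sketch of that conversion, using round-to-nearest-even on the top 16 bits of the float32 encoding (the helper names are illustrative, not from the PR; NaN/inf are not specially handled here — in TensorFlow itself you would simply use the `mixed_bfloat16` Keras mixed-precision policy rather than convert by hand):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Round a float to its 16-bit bfloat16 encoding (round-to-nearest-even).

    Note: this sketch ignores NaN/inf special cases for brevity.
    """
    # Reinterpret the float32 value as its 32-bit integer encoding.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add 0x7FFF plus the LSB of the kept half: this carries into the top
    # 16 bits exactly when the discarded half is > 0x8000, or == 0x8000
    # with an odd kept half (the round-to-nearest-even tie rule).
    rounded = bits + 0x7FFF + ((bits >> 16) & 1)
    return (rounded >> 16) & 0xFFFF

def bfloat16_bits_to_float(b: int) -> float:
    """Widen a bfloat16 encoding back to float (zero-fill the low 16 bits)."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]
```

Values whose mantissa fits in 7 bits round-trip exactly (e.g. 1.0, 3.140625), while pi loses its low mantissa bits, which is the precision/performance trade-off the doc update discusses.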