BFLOAT16 is a new floating-point format: a 16-bit format with an 8-bit exponent and a 7-bit mantissa (versus the 5-bit exponent and 10-bit mantissa of half-precision, which is currently f16), designed for deep learning.
The bfloat16 format is used in upcoming Intel AI hardware such as the Nervana NNP-L1000, Xeon processors, and Intel FPGAs, as well as in Google Cloud TPUs and TensorFlow. Arm Neon and SVE also support the bfloat16 format.
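To make the layout concrete, here is a minimal sketch (not part of the proposal) of converting between f32 and a bfloat16 bit pattern in current Zig, using naive truncation rather than the round-to-nearest-even that hardware implementations typically use; it assumes the single-argument builtin syntax of Zig 0.11+ and the function names are just illustrative:

```zig
const std = @import("std");

// Truncate an f32 to a bfloat16 bit pattern. bfloat16 keeps the f32 sign bit
// and 8-bit exponent and drops the low 16 mantissa bits, so conversion is a shift.
fn f32ToBf16Bits(x: f32) u16 {
    const bits: u32 = @bitCast(x);
    return @as(u16, @truncate(bits >> 16));
}

// Widen a bfloat16 bit pattern back to f32 by zero-filling the low 16 bits.
fn bf16BitsToF32(b: u16) f32 {
    return @as(f32, @bitCast(@as(u32, b) << 16));
}

pub fn main() void {
    const x: f32 = 3.14159;
    const b = f32ToBf16Bits(x);
    std.debug.print("bf16 bits: 0x{x}, round-tripped: {d}\n", .{ b, bf16BitsToF32(b) });
}
```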
Selected excerpts:
- The Rust proposal is to call the type f16b.
- The type should always have size 2 and alignment 2 on all platforms.
References:
- Wikipedia Article
- A Transprecision Floating-Point Platform for Ultra-Low Power Computing
- Rust PR
- LLVM MR for some x86 intrinsics
- GCC 10 Adds ARMv8.6-A Targeting, BFloat16 + i8MM Options
As a more general issue: how should we add new numeric types going forward, e.g. Unum? Since Zig does not support operator overloading, such types would have to be provided by the language core for ergonomic use.
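To illustrate the ergonomics point, here is a rough sketch (assuming Zig 0.11+ syntax; the type and method names are hypothetical) of what a userland bfloat16 would look like today; every operation becomes a named method call instead of an operator:

```zig
const std = @import("std");

// Hypothetical userland bfloat16 wrapper, stored as its raw bit pattern.
const BF16 = struct {
    bits: u16,

    fn fromF32(x: f32) BF16 {
        const u: u32 = @bitCast(x);
        // Naive truncation; a real implementation would round to nearest even.
        return .{ .bits = @as(u16, @truncate(u >> 16)) };
    }

    fn toF32(self: BF16) f32 {
        return @as(f32, @bitCast(@as(u32, self.bits) << 16));
    }

    // Arithmetic has to widen to f32 and go through a named method,
    // since Zig has no operator overloading.
    fn add(self: BF16, other: BF16) BF16 {
        return fromF32(self.toF32() + other.toF32());
    }
};

pub fn main() void {
    const a = BF16.fromF32(1.5);
    const b = BF16.fromF32(2.25);
    // `a.add(b)` instead of `a + b`: workable, but clumsy in numeric code.
    std.debug.print("{d}\n", .{a.add(b).toF32()});
}
```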