
Draft of Zfh extension for IEEE 754 binary16 support #496

Merged 8 commits on Jul 28, 2021
Conversation

aswaterman
Member

This proposal follows the template of the existing FP extensions. Future extensions might also provide widening ops, but I think that's out of scope for the baseline.

@aswaterman
Member Author

See riscv/riscv-v-spec#349

@kito-cheng
Member

Any plan to merge this? I think it's only version 0.1, which means we still have a chance to change anything; otherwise it's like a hidden/secret extension in a branch.

Several times, when I said there is already a spec draft for Zfh/FP16, people told me they hadn't seen the spec, and then I had to point out that it doesn't exist in the master branch yet; you have to find it in the zfh branch :P

@aswaterman
Member Author

I would love to merge it, but IIRC the plan was to wait until @kasanovic wrote an email proposing it.

@kito-cheng
Member

Oh, OK. Thanks for the clarification :)

@vowstar

vowstar commented Aug 30, 2020

I really hope this PR can be merged and frozen soon; it's a very useful extension.
I wonder whether there is a corresponding extension for bfloat16?
Could this extension simply be adapted to the brain floating-point format?

src/zfh.tex Outdated
half-precision operands to single-precision, performing the operation
using single-precision arithmetic, then converting back to half-precision.
Performing half-precision fused multiply-addition using this method incurs
a 1-ulp error on some inputs.
Contributor

Suggest clarifying that the "error" occurs only under RNE and RMM (double rounding is innocuous for the directed rounding modes).
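The 1-ulp double-rounding error the excerpt mentions can be shown concretely. The sketch below is my own construction (the input values are not from the spec); it assumes NumPy's `float16`/`float32` arithmetic rounds per IEEE 754 round-to-nearest-even:

```python
import numpy as np

# Hypothetical demonstration (my own construction, not from the spec):
# emulating binary16 FMA by widening to binary32 double-rounds under RNE
# and is off by 1 ulp on this input.
a = np.float16(1480 / 65536)   # significand 1480/1024, exponent -6; exact in fp16
b = np.float16(1417 / 65536)   # significand 1417/1024, exponent -6; exact in fp16
c = np.float16(1.0)

# Emulation path: the fp32 product a*b = 2**-11 + 2**-29 is exact (19
# significand bits), so mul-then-add here equals a single fp32 FMA. The
# fp32 sum 1 + 2**-11 + 2**-29 rounds (RNE) to 1 + 2**-11, which then
# ties-to-even *down* to 1.0 when narrowed to fp16.
emulated = np.float16(np.float32(a) * np.float32(b) + np.float32(c))

# Reference path: the exact value 1 + 2**-11 + 2**-29 fits in fp64, and a
# single fp64 -> fp16 narrowing is correctly rounded (53 >= 2*11 + 2), so
# this yields the correctly rounded fp16 FMA result.
exact = np.float64(a) * np.float64(b) + np.float64(c)
correct = np.float16(exact)

print(float(emulated))  # 1.0
print(float(correct))   # 1.0009765625  (1 + 2**-10, one fp16 ulp higher)
```

Under the directed rounding modes the second rounding cannot carry the result past a boundary the first rounding already respected, which is why the discrepancy is confined to RNE and RMM.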

Member Author

@knightsifive Yeah, good point.

aswaterman and others added 8 commits July 28, 2021 14:39
Signed-off-by: Chih-Min Chao <chihmin.chao@sifive.com>

Co-authored-by: Chih-Min Chao <cmchao@gmail.com>
Signed-off-by: Chih-Min Chao <cmchao@gmail.com>

Co-authored-by: Chih-Min Chao <cmchao@gmail.com>