
[TSL] Bump ml_dtypes to version 0.5.0 #17075

Closed
apivovarov wants to merge 1 commit

Conversation

@apivovarov (Contributor) commented Sep 11, 2024

ml_dtypes 0.5.0 updates (a short usage sketch follows the list):

  • Added new 8-bit float types following the IEEE 754 convention: ml_dtypes.float8_e4m3 and ml_dtypes.float8_e3m4
  • Added the 8-bit floating point type ml_dtypes.float8_e8m0fnu, which is the OpenCompute MX scale format.
  • Added new 4-bit and 6-bit float types: ml_dtypes.float4_e2m1fn, ml_dtypes.float6_e2m3fn and ml_dtypes.float6_e3m2fn.
  • Fixed the outputs of float divmod and floor_divide when the denominator is zero.
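
For context, a minimal Python sketch (not part of this PR's diff) of how the new 0.5.0 types are used through NumPy; it assumes ml_dtypes >= 0.5.0 and numpy are installed:

    # The new formats are exposed as NumPy scalar types.
    import ml_dtypes
    import numpy as np

    # IEEE-style 8-bit floats added in 0.5.0.
    x = np.array([0.25, 1.0, 3.5], dtype=ml_dtypes.float8_e4m3)
    y = x.astype(ml_dtypes.float8_e3m4)

    # finfo reports range and precision for each format.
    print(ml_dtypes.finfo(ml_dtypes.float8_e4m3).max)
    print(ml_dtypes.finfo(ml_dtypes.float8_e3m4).max)

    # One of the narrower MX formats from the release notes.
    w = np.array([0.5, 1.5], dtype=ml_dtypes.float4_e2m1fn)
    print(w, y)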

ml_dtypes/releases

Related PRs

@apivovarov changed the title from "[TSL] Bump ml_dtypes. Add float8_e4m3, float_e3m4" to "[TSL] Bump ml_dtypes. Add float8_e4m3, float8_e3m4" on Sep 11, 2024
@ddunl (Member) left a comment


LGTM, I'll patch today/tomorrow. Thanks!

@apivovarov (Contributor, Author)

Hi David, just a friendly reminder to take a look at this PR when you get a chance. Thanks! @ddunl

copybara-service bot pushed a commit to google/tsl that referenced this pull request Sep 13, 2024 (the same change was also mirrored to this repository and to tensorflow/tensorflow):

ml_dtypes Updates:
- Add float8_e4m3 and float8_e3m4 type support
- Fix float divmod with a zero denominator
- Add int2 and uint2 types
(ml_dtypes/commits)

Related PRs
- ml_dtypes PR jax-ml/ml_dtypes#161 Add float8_e4m3 (Merged)
- XLA PR #16585 Add support for float8_e4m3 (In Review)

This closes openxla/xla#17075

PiperOrigin-RevId: 674396944
@ddunl (Member) commented Sep 13, 2024

Thanks for the reminder! I had to make a slight change to the build file as well, but it's out for review internally now, so hopefully it will be merged today or Monday.

@apivovarov (Contributor, Author)

ml_dtypes 0.5.0 was released on Sep 13, 2024:

    ML_DTYPES_COMMIT = "f802a63d5ef65e33978eece14464c8b02a7c269d"
    ML_DTYPES_SHA256 = "f789d68472cf02f548f0881255438708ed734f1ffcd6c64cd3e9ffe23478a9c5"
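
For reference, a small standard-library sketch that recomputes the checksum of the GitHub source archive for this commit; the /archive/&lt;commit&gt;.tar.gz URL layout is an assumption here, not how the Bazel workspace actually fetches the dependency:

    # Recompute the sha256 of the ml_dtypes source archive for the pinned commit
    # and compare it with ML_DTYPES_SHA256. Requires network access.
    import hashlib
    import urllib.request

    ML_DTYPES_COMMIT = "f802a63d5ef65e33978eece14464c8b02a7c269d"
    ML_DTYPES_SHA256 = "f789d68472cf02f548f0881255438708ed734f1ffcd6c64cd3e9ffe23478a9c5"

    url = f"https://github.com/jax-ml/ml_dtypes/archive/{ML_DTYPES_COMMIT}.tar.gz"
    with urllib.request.urlopen(url) as resp:
        digest = hashlib.sha256(resp.read()).hexdigest()

    print("match" if digest == ML_DTYPES_SHA256 else f"mismatch: {digest}")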

Should I update this PR, or open a new one? @ddunl

@ddunl (Member) commented Sep 16, 2024

Updating this one is fine. I'm realizing too that the internal change should probably update TF's as well, so I haven't submitted it yet. But moving to 0.5 sounds good to me!

@apivovarov (Contributor, Author)

> Updating this one is fine. I'm realizing too that the internal change should probably update TF's as well, so I haven't submitted it yet. But moving to 0.5 sounds good to me!

Will it be possible to update TF's ml_dtypes dependency requirement to 0.5.0?
Currently the constraint is ml_dtypes >= 0.4.0, < 0.5.0 in the following TF files (a quick check is sketched after the list):

tensorflow/tools/pip_package/setup.py
ci/official/requirements_updater/requirements.in
ci/official/requirements_updater/requirements_numpy2/requirements_np2.in
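
A sketch of that check using the third-party packaging library; the relaxed range shown is only a hypothetical example, not a value proposed in this PR:

    # Shows that the current TF constraint rejects 0.5.0, so the requirement
    # files listed above would also need to be relaxed.
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    current = SpecifierSet(">=0.4.0,<0.5.0")   # constraint in the TF files today
    relaxed = SpecifierSet(">=0.4.0,<0.6.0")   # hypothetical relaxed constraint

    print(Version("0.5.0") in current)   # False: 0.5.0 is excluded
    print(Version("0.5.0") in relaxed)   # True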

@ddunl (Member) commented Sep 16, 2024

Oh I see. Yeah, I'll have to test this first. If it's troublesome to upgrade to 0.5.0, I'll just revert to the old version if that works for you.

@apivovarov changed the title from "[TSL] Bump ml_dtypes. Add float8_e4m3, float8_e3m4" to "[TSL] Bump ml_dtypes to version 0.5.0" on Sep 16, 2024
@apivovarov force-pushed the ml_dtypes_Sep11 branch 2 times, most recently from 54bb39a to df711b4 on September 16, 2024 21:51
@apivovarov (Contributor, Author)

> Oh I see. Yeah, I'll have to test this first. If it's troublesome to upgrade to 0.5.0, I'll just revert to the old version if that works for you.

That certainly works. Thanks, David!

copybara-service bot pushed a commit to google/tsl that referenced this pull request Sep 17, 2024 (also mirrored to tensorflow/tensorflow):

ml_dtypes Updates:
- Add float8_e4m3 and float8_e3m4 type support
- Fix float divmod with a zero denominator
- Add int2 and uint2 types
(ml_dtypes/commits)

Related PRs
- ml_dtypes PR jax-ml/ml_dtypes#161 Add float8_e4m3 (Merged)
- XLA PR #16585 Add support for float8_e4m3 (In Review)

This closes openxla/xla#17075

PiperOrigin-RevId: 675687080
@apivovarov (Contributor, Author)

It seems the bot merged v1 of this PR (the ml_dtypes Sep 9 commit). I opened a new PR, #17230, to bump to 0.5.0 (the ml_dtypes Sep 13 commit).

@ddunl (Member) commented Sep 18, 2024

Oh sorry, this was intentional on my part. There's more testing required to bump the TF commit as well, though I'll look into it if it's helpful (reading back the discussion on this PR, I may have wrongly concluded it made no difference).

@apivovarov (Contributor, Author)

Agreed. For the float8_e4m3 and float8_e3m4 work (#16585) it makes no difference.
