
32b dim_t fixes #3990


pjaaskel commented Jan 8, 2020

These are needed to fix tests with 32b dim_t.

Also sneak in a fix for some whitespace in a .py script that "fix" keeps modifying (at least with clang-format-7).

Test Plan:

32b and 64b dim_t via pocl CPU driver (LLVM 8).


opti-mix left a comment

@pjaaskel I'm not sure that replacing ElemKind::Int64ITy with sdim_t in the SparseLengthsSum context is valid. IIRC, these operations assume very specific fixed-size payloads.

cc @jfix71 Is it OK to do this?

jfix71 approved these changes Jan 9, 2020

jfix71 left a comment

This is fine by me -- we are already downconverting int64 indices to int32 anyway for the NNPI and Habana backends. Additionally, in the long run the SLS ops should/will move to int32 indices, so backends may claim support for both for now.


facebook-github-bot left a comment

@jfix71 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


facebook-github-bot commented Jan 9, 2020

@jfix71 merged this pull request in bfb251d.

@pjaaskel pjaaskel deleted the parmance:32b_dim_t_fixes branch Jan 9, 2020