Fix BFloat16 tensor conversion with newer PyTorch #4

Open

sheikhlimon wants to merge 1 commit into rhel-lightspeed:main from sheikhlimon:fix/bfloat16-numpy-conversion
Conversation

@sheikhlimon

embeddings.cpu().numpy() throws TypeError: Got unsupported ScalarType BFloat16 with PyTorch >= 2.11, which outputs BFloat16 tensors by default for the Granite embedding model.

@sheikhlimon sheikhlimon requested a review from a team as a code owner April 30, 2026 13:47
