This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

.numpy() command in _aggregate_token_embeddings isn't detaching from GPU into CPU label:bug #5652

Closed
lizardintelligence opened this issue Jun 2, 2022 · 1 comment · Fixed by #5656
Comments

@lizardintelligence

I believe embeddings.numpy() on lines 45 and 63 of saliency_interpreter.py should be embeddings.detach().cpu().numpy().
Otherwise, running the interpretation on a GPU raises: 'TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.'
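A minimal sketch of the failure and the proposed change, assuming embeddings is a gradient-tracking CUDA tensor as it would be inside _aggregate_token_embeddings (the tensor construction below is illustrative, not copied from saliency_interpreter.py):

```python
import torch

# Illustrative stand-in for the token embeddings used during interpretation:
# a tensor that tracks gradients and, when a GPU is available, lives on cuda:0.
embeddings = torch.randn(2, 5, 8, requires_grad=True)
if torch.cuda.is_available():
    embeddings = embeddings.cuda()

# Current call: on a CUDA tensor this raises
# "TypeError: can't convert cuda:0 device type tensor to numpy. ..."
# embeddings.numpy()

# Proposed fix: drop the autograd graph and copy to host memory first.
token_embeddings = embeddings.detach().cpu().numpy()
print(token_embeddings.shape)  # (2, 5, 8)
```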

@github-actions

This issue is being closed due to lack of activity. If you think it still needs to be addressed, please comment on this thread 👇
