
[feature request] Add new device type works on CPU #83817

Open
takeru1205 opened this issue Aug 21, 2022 · 3 comments
Labels
enhancement: Not as big of a feature, but technically not a bug. Should be easy to fix. triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.

Comments

@takeru1205

🚀 The feature, motivation and pitch

When we write code on a CPU machine that will later run on a GPU machine, we sometimes forget to transfer a tensor from GPU to CPU, or the other way around, because we write device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") at the top of the code.
When we test on a CPU machine, all tensors end up on the CPU, so the test cannot tell us whether tensors will be transferred as we expect.
So, how about adding a new device type that works on a CPU (like a fake GPU)?

In my assumption,

Now (no GPU machine):

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
a = torch.arange(10).to(device)
a.numpy()  # actually needs a.cpu() before numpy()
# >>> No Error
```

New feature (no GPU machine):

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "fake-gpu")
a = torch.arange(10).to(device)
a.numpy()  # needs a.cpu() before numpy()
# >>> Error
```
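To make the idea concrete, here is a plain-Python mock-up of what a "fake-gpu" device could check (this is a hypothetical sketch, not PyTorch code; FakeTensor and DeviceError are invented names): data always lives on the CPU, but a logical device tag makes numpy() fail exactly where real CUDA code would fail.

```python
class DeviceError(RuntimeError):
    """Raised when an op requires a CPU tensor but got a 'fake-gpu' one."""


class FakeTensor:
    """Toy stand-in for a tensor: data stays on the CPU either way,
    but a logical device tag mimics GPU placement."""

    def __init__(self, data, device="cpu"):
        self.data = list(data)
        self.device = device

    def to(self, device):
        # "Moving" never copies here; only the logical tag changes.
        return FakeTensor(self.data, device=device)

    def cpu(self):
        return self.to("cpu")

    def numpy(self):
        # The point of the proposal: fail on a CPU-only machine
        # at the same spot real CUDA code would fail.
        if self.device != "cpu":
            raise DeviceError(
                f"can't convert {self.device} tensor to numpy; call .cpu() first"
            )
        return self.data  # stand-in for an ndarray


a = FakeTensor(range(10)).to("fake-gpu")
try:
    a.numpy()  # forgot .cpu(): raises, as the proposal wants
except DeviceError:
    pass
a.cpu().numpy()  # correct transfer: works
```

Running the same test suite with the tag set to "fake-gpu" would then surface missing .cpu() calls without any GPU hardware.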

Alternatives

No response

Additional context

No response

@ngimel
Collaborator

ngimel commented Aug 24, 2022

This seems to address a niche scenario at the cost of removing clarity about where each tensor is. PyTorch usually opts not to do things behind the user's back.

@ngimel added the enhancement and triaged labels Aug 24, 2022
@peng-1998

I have also been hoping for a device that looks like a GPU but is actually a CPU, to make it easier to debug errors caused by mixing devices. Would it be possible to design a flag so that when the flag is True, CUDA operations actually run on the CPU and no GPU is really needed?
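One way to picture such a flag, as a plain-Python sketch (all names here — FAKE_CUDA, is_available, fake_cuda — are hypothetical, not PyTorch API): a module-level switch makes the availability check report True, so the usual device-selection branch picks "cuda:0", while any real computation would still run on the CPU.

```python
import contextlib

FAKE_CUDA = False  # hypothetical global flag


def is_available():
    # With the flag set, pretend CUDA exists so the usual
    # `"cuda:0" if is_available() else "cpu"` branch picks "cuda:0".
    return FAKE_CUDA


@contextlib.contextmanager
def fake_cuda():
    """Temporarily enable the fake-CUDA flag, e.g. inside a test."""
    global FAKE_CUDA
    old, FAKE_CUDA = FAKE_CUDA, True
    try:
        yield
    finally:
        FAKE_CUDA = old


with fake_cuda():
    device = "cuda:0" if is_available() else "cpu"
# Kernels would still execute on the CPU; only the device
# bookkeeping behaves as if a GPU were present.
assert device == "cuda:0"
```

Under such a flag, device-mismatch mistakes would raise on a CPU-only machine instead of silently passing.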

@ikamensh
Contributor

This is not a niche scenario. Everyone except experimenting novices should test their code, and running tests on a GPU machine is costly. If your code targets GPUs but is tested on CPUs, you will miss many problems.
