simple and small optimization: default device argument for custom device #103828
Comments
cc @ezyang, who was discussing this on the proposed PRs
There is already
Yes, the
But we want to add an API to set the device argument of some operators, which is
As I said, I have proposed a context manager similar to the torch.device context manager, but which ONLY applies to things like
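Such a scoped default could be sketched in plain Python. This is a hypothetical illustration of the context-manager idea, not an actual PyTorch API; all names (`default_arg_device`, `get_default_arg_device`, `pin_memory_like`) are made up:

```python
import threading
from contextlib import contextmanager

# Hypothetical thread-local registry for the default *argument* device.
_default_arg_device = threading.local()

def get_default_arg_device(fallback="cuda"):
    """Device type that operators should default to when none is given."""
    return getattr(_default_arg_device, "value", fallback)

@contextmanager
def default_arg_device(device_type):
    """Temporarily override the default device argument, restoring the
    previous value on exit -- a scoped version of a set-once API."""
    prev = getattr(_default_arg_device, "value", None)
    _default_arg_device.value = device_type
    try:
        yield
    finally:
        if prev is None:
            del _default_arg_device.value
        else:
            _default_arg_device.value = prev

def pin_memory_like(tensor, device_type=None):
    """Stand-in for an operator such as pin_memory whose device
    argument currently defaults to "cuda"."""
    return device_type or get_default_arg_device()
```

Inside `with default_arg_device("privateuse1"):`, `pin_memory_like(t)` would resolve to the custom backend; outside the block, behaviour is unchanged.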
@ezyang yeah, thank you so much, and we have made some tests with
So now we do not know how to solve the API
Is it just torch.device? We can make torch.device interposable by TorchFunctionMode; would that be sufficient?
ehha, except for
And I have tried to use
You need to modify it, but the modification is a lot smaller.
Change it to something like
and then you should be able to interpose on torch.device constructions. Then, based on your other patch, you just need to modify is_pinned (should already be interposable) and fork_rng (do something similar, but use the Python-side torch function handling idiom).
Yeah, I gave a wrong string, and I have fixed it now. This is PR #106017.
The result is:
And in Python, as in the storage example you gave, we may need to add
Are you more comfortable with the overhead if you assume you're going to torch.compile the model anyway?
Yeah, we also want to support training models in both eager mode and torch.compile mode. @ezyang
🚀 The feature, motivation and pitch
1. For many operators (such as pin_memory), the device argument defaults to cuda if it is not given; for other devices, we have to pass an extra device_type argument compared to cuda. So we add an API to set the default device argument just once at the beginning, to keep usage consistent with cuda.
2. And there are some APIs defined in Python to which we add an argument named device_type with a default value of cuda (such as https://github.com/pytorch/pytorch/blob/main/torch/random.py#L104), so that we can support more devices (the privateuse1 device).

So we want to add an API to set the default device argument just once at the beginning, and an API to get the default device when device_type is not given, keeping usage consistent with cuda.
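The proposed setter/getter pair could look roughly like this. It is a standalone sketch for illustration; the names (`set_default_device_type`, `get_default_device_type`, `fork_rng_like`) are hypothetical, not actual PyTorch APIs:

```python
# Hypothetical module-level default-device registry.
_default_device_type = "cuda"  # preserves today's behaviour if never set

def set_default_device_type(device_type):
    """Called once at the beginning, e.g.
    set_default_device_type("privateuse1")."""
    global _default_device_type
    _default_device_type = device_type

def get_default_device_type():
    """Consulted wherever device_type currently hard-codes "cuda"."""
    return _default_device_type

def fork_rng_like(device_type=None):
    """Sketch of how a Python-level API such as torch.random.fork_rng
    could fall back to the registered default instead of "cuda"."""
    return device_type if device_type is not None else get_default_device_type()
```

With this shape, CUDA users see no change, while a privateuse1 backend sets the default once at startup and every device_type-taking API picks it up.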
Alternatives
No response
Additional context
No response
cc @albanD