Implement tensor.size(int) for BatchedTensor #40028
Conversation
💊 CI failures summary and remediations: as of commit 69ec2ca (more details on the Dr. CI page), ✅ none of the CI failures appear to be your fault. ❄️ 3 failures were tentatively classified as flaky, but reruns have not yet been triggered to confirm.
Summary (landed commit, description as below): Pull Request resolved: pytorch#40028. Differential Revision: D22070655. Pulled By: zou3519. fbshipit-source-id: 18530579ad41f3c4f96589da41eb24a46caf7af9
Stack from ghstack:
- #40042 Change VmapTransforms to use SmallVector instead of vector<int64_t>
- -> #40028 Implement tensor.size(int) for BatchedTensor (this PR)
We have tensor.size(int) call native::size directly. Some alternatives I considered were:
- Call VariableType::size directly. That seems isomorphic to what we're doing now.
- When creating a BatchedTensor from a regular tensor, put all of the keys on that tensor into the BatchedTensor's dispatch key set and use the dispatcher fallthrough mechanism. That seems weird because BatchedTensor is a tensor wrapper, and it is also error prone: if BatchedTensor gets the VariableType key, there's a chance that, if something goes wrong, an AutogradMeta gets created on it.
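For context on the chosen approach: a minimal sketch, assuming a PyTorch source tree from around the time of this PR, of what `at::native::size` amounts to. It only consults `dim()` and `sizes()`, which is why calling it directly returns the correct logical size for a wrapper like BatchedTensor without another trip through the dispatcher (`size_like_native` is a hypothetical name used here for illustration):

```cpp
#include <ATen/ATen.h>
#include <ATen/WrapDimUtils.h>  // at::maybe_wrap_dim

// Sketch of the native::size logic (not the diff from this PR): wrap a
// possibly-negative dim, then read the size straight off the TensorImpl.
int64_t size_like_native(const at::Tensor& self, int64_t dim) {
  dim = at::maybe_wrap_dim(dim, self.dim(), /*wrap_scalar=*/false);
  return self.sizes()[dim];
}
```

And a hedged illustration of the rejected fallthrough alternative using the `torch::Library` API; the `Batched` dispatch key name and the `size.int` overload name are assumptions based on the vmap work of this era, not taken from this PR:

```cpp
#include <torch/library.h>

// Illustration only: have aten::size.int fall through the Batched key, so
// dispatch skips the BatchedTensor handler and redispatches to whatever
// keys the wrapped tensor contributed. Rejected above because it requires
// the wrapper to carry the wrapped tensor's keys (e.g. VariableType),
// risking a stray AutogradMeta on the wrapper.
TORCH_LIBRARY_IMPL(aten, Batched, m) {
  m.impl("size.int", torch::CppFunction::makeFallthrough());
}
```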
Test Plan:
- `./build/bin/vmap_test`
Differential Revision: D22070655