Test TensorTypeSet instead of autograd_meta_ for variable-ness. #28543
Conversation
By the current autograd_meta_ <=> type_set_ invariant (now explicitly documented in the right place!), these are equivalent. But when I introduce null autograd_meta_ optimization, they won't be equivalent anymore: TensorTypeSet is going to give me the right information no matter what. In the long run, this patch will be a wash, because everything will "be a variable" in the long term. But I am making this change now to make sure that the invariant actually holds. Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
c10/core/TensorImpl.h (outdated)

// of std::default_delete<AutogradMeta> which won't work as AutogradMeta
// is incomplete at this point. But the defaulting to nullptr is also
// pointless because unique_ptr has a default constructor which will do the
// right thing.
Is the NB: part of the comment meant for another PR? It says "We CANNOT default this to nullptr",
but we still do in this PR :D
Yeah, it's wrong. I just fixed it.
CircleCI build failures summary (as of commit 49a1e11):
Here are the reasons each build failed. This comment was automatically generated by Dr. CI. Please report bugs/suggestions on the GitHub issue tracker.
Stack from ghstack:
By the current autograd_meta_ <=> type_set_ invariant (now explicitly documented
in the right place!), these are equivalent. But when I introduce the null
autograd_meta_ optimization, they won't be equivalent anymore: TensorTypeSet is
going to give me the right information no matter what.
In the long run, this patch will be a wash, because eventually everything will
"be a variable." But I am making this change now to make sure that the invariant
actually holds.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D18171157