Conversation

Contributor

@ezyang ezyang commented Oct 23, 2019

Stack from ghstack:

By the current autograd_meta_ <=> type_set_ invariant (now explicitly documented
in the right place!), the two checks are equivalent. But once I introduce the null
autograd_meta_ optimization, they won't be equivalent anymore: TensorTypeSet will
give the right information no matter what.

In the long run, this patch will be a wash, because eventually everything will
"be a variable". But I am making this change now to make sure that the invariant
actually holds.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D18171157](https://our.internmc.facebook.com/intern/diff/D18171157)

ezyang added a commit that referenced this pull request Oct 23, 2019
ghstack-source-id: a551928
Pull Request resolved: #28543
ezyang added a commit that referenced this pull request Oct 23, 2019
ghstack-source-id: 97315a3
Pull Request resolved: #28543
// of std::default_delete<AutogradMeta> which won't work as AutogradMeta
// is incomplete at this point. But the defaulting to nullptr is also
// pointless because unique_ptr has a default constructor which will do the
// right thing.
Contributor
Is the NB: part of the comment meant for another PR? It says "We CANNOT default this to nullptr", but we still default it to nullptr in this PR :D

Contributor Author
Yeah, it's wrong. I just fixed it.

@kostmo
Member

kostmo commented Oct 31, 2019

CircleCI build failures summary

As of commit 49a1e11:

  • 0/1 flaky

Here are the reasons each build failed.


This comment was automatically generated by Dr. CI.

@facebook-github-bot
Contributor

@ezyang merged this pull request in a844809.
