
Const-correctness #27

Open · ezyang opened this issue Jul 20, 2017 · 7 comments
Comments

ezyang (Contributor) commented Jul 20, 2017

ATen presently has no const modifiers, even for methods that are obviously const (e.g. virtual bool isCuda() in Type). Make it more const-correct.

ebetica (Contributor) commented Aug 1, 2017

On a separate note, I really don't think things like https://github.com/zdevito/ATen/blob/master/doc/Functions.h#L451 should be const in the output.

gchanan (Contributor) commented Sep 19, 2017

Could all methods on Type be const? (I haven't checked.) One thing that is more painful than necessary is passing around Types that you know exist, so you don't want the possible null allowed by pointer types. Since they aren't const-correct, passing them by const reference doesn't work well; the next logical thing to try is passing by value, but that doesn't work either because they are abstract. So you end up passing by non-const reference.
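For concreteness, a minimal sketch of the situation (isCuda() comes from the issue text above; the surrounding classes are simplified stand-ins, not the real ATen ones):

#include <iostream>

// Simplified stand-in for Type: isCuda() is not marked const, even though it
// does not mutate anything.
struct Type {
  virtual ~Type() = default;
  virtual bool isCuda() = 0;
};

struct CPUFloatType : Type {
  bool isCuda() override { return false; }
};

// What you would like to write: pass a Type you know exists by const reference.
void describe(const Type& t) {
  // std::cout << t.isCuda() << '\n';  // error: isCuda() is not const
  (void)t;
}

// What you end up writing instead: pass by non-const reference.
void describeWorkaround(Type& t) {
  std::cout << t.isCuda() << '\n';
}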

zdevito (Owner) commented Sep 19, 2017

Yes, I think all things on Type can be const.

zdevito (Owner) commented Sep 19, 2017

(Also, because Tensor is actually a shared pointer to the tensor implementation, most things on Tensor can be const too! But it seems disingenuous to mark them so, since 'const Tensor' is assumed to mean 'won't modify this tensor's data'.)

ezyang (Contributor, Author) commented Sep 20, 2017

Pointed out to me by @achernya: what makes things tricky is that there is very common confusion about the precedence of const in an expression like const int*. The correct parsing is not "this is a const (int pointer)" but "this is a pointer to a (const int)". This is why int* const is how you actually spell "const pointer to an int".

Now, the problem is that Tensor morally expands to a pointer to a (non-const) TensorImpl. And so if you say const Tensor, the precedence now goes the way you expect: you have a const (pointer to a (non-const) TensorImpl), which is not at all the same thing as const TensorImpl*. But this makes all the difference: the correct semantics for const Tensor still let you mutate the underlying TensorImpl; it's just the pointer itself that's const.
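To make both points concrete, a small self-contained sketch (the Tensor and TensorImpl below are simplified stand-ins for illustration, not the real ATen classes):

void pointer_precedence(int a, int b) {
  const int* p1 = &a;   // pointer to (const int)
  // *p1 = 5;           // error: the pointee is const
  p1 = &b;              // fine: the pointer itself can be reseated

  int* const p2 = &a;   // const pointer to (non-const int)
  *p2 = 5;              // fine: the pointee is mutable
  // p2 = &b;           // error: the pointer itself is const
}

// Tensor morally behaves like the second case: the handle is const,
// the pointed-to TensorImpl is not.
struct TensorImpl { float value = 0.0f; };

struct Tensor {
  TensorImpl* impl;
  void fill(float v) const {  // legal as a const member: it never changes 'impl'
    impl->value = v;          // ...yet it mutates the underlying TensorImpl
  }
};

void demo(const Tensor& t) {
  t.fill(42.0f);  // compiles: only the handle is const, not the data
}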

This also explains why the following idea doesn't work: what if we hid the (mutable) TensorImpl pointer as private data, and provided only methods which enforced const-correctness? You could, but you'd be forced to take out the copy constructor. Otherwise, you can write this:

Tensor unconst(const Tensor& x) {
  return x;
}

C++ will happily copy construct a non-const Tensor from a const one, because, hey, it's a copy constructor, it does a semantic copy, right? But Tensor's copy constructor does nothing of the kind.
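Spelled out, the loophole looks like this (a self-contained sketch; the simplified Tensor and the mutator fill_ are hypothetical stand-ins, not the real ATen API):

struct TensorImpl { float value = 0.0f; };

struct Tensor {
  explicit Tensor(TensorImpl* i) : impl(i) {}
  void fill_(float v) { impl->value = v; }  // deliberately non-const mutator
 private:
  TensorImpl* impl;
};

Tensor unconst(const Tensor& x) {
  return x;  // the copy constructor copies the pointer and "forgets" the const
}

void demo(const Tensor& t) {
  // t.fill_(1.0f);        // rejected, as intended: fill_ is not const
  unconst(t).fill_(1.0f);  // accepted: mutates the very same TensorImpl anyway
}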

ezyang (Contributor, Author) commented Sep 20, 2017

I'll also add, if we got rid of the copy constructor, then we could close this loophole, but I would assume giving up Tensor x = y.add(z) is too much to bear... (Another possibility: track constness dynamically, and raise an error if you call a non-const method when the dynamic constness is true.)
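One possible reading of the dynamic-constness idea, purely as a sketch (nothing like this exists in ATen; all names are hypothetical):

#include <stdexcept>

struct TensorImpl { float value = 0.0f; };

struct Tensor {
  TensorImpl* impl;
  bool dynamically_const = false;

  Tensor asConst() const {
    Tensor copy = *this;
    copy.dynamically_const = true;  // constness travels with the handle at runtime
    return copy;
  }

  void fill_(float v) {  // trailing '_': mutating, following the ATen convention
    if (dynamically_const) {
      throw std::runtime_error("fill_ called on a dynamically-const Tensor");
    }
    impl->value = v;
  }
};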

zdevito (Owner) commented Sep 20, 2017

Yes, the whole point of having Tensor rather than shared_ptr<Tensor> is to make sure the library's API stays simple. It is intended to be directly usable as a lower-overhead way of accessing tensors compared to using a scripting language.

So I would caution against trying to get const-ness right at the cost of clarity or usability (for instance, adding a ConstTensor wrapper). All methods that modify a tensor are already annotated with _ or _out, so it is pretty clear what is going on from the API.
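For reference, that convention reads like this in use (a sketch only; the exact argument order of the _out overloads has varied across ATen versions):

#include <ATen/ATen.h>

void convention_demo() {
  at::Tensor a = at::ones({2, 2});
  at::Tensor b = at::ones({2, 2});

  at::Tensor c = a.add(b);         // plain name: functional, 'a' is untouched
  a.add_(b);                       // trailing '_': mutates 'a' in place
  at::Tensor out = at::empty({2, 2});
  at::add_out(out, a, b);          // '_out': writes the result into 'out'
}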

So far I've been accepting commits that improve usability, i.e. ones that make it possible to do things even when you end up with a 'const Type &' due to C++ libraries or language behavior.

facebook-github-bot pushed a commit to pytorch/pytorch that referenced this issue Oct 15, 2018
Summary:
Pull Request resolved: #12623

Mark Storage functions as const so that they can be exposed outside of TensorImpl when calling storage()

Based on this discussion zdevito/ATen#27 (comment)

Also potentially useful in the effort to remove ShareExternalPointer

Reviewed By: ezyang

Differential Revision: D10370201

fbshipit-source-id: 43cf3803a4aa7b94fdf0c3a604d7db769ca0bdd5
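A trimmed-down sketch of the kind of change this commit describes (hypothetical, simplified types; only the idea of const-qualifying the accessor comes from the commit message):

struct Storage { /* refcounted data pointer, sizes, etc. */ };

struct TensorImpl {
  Storage storage_;
  // Marked const so it can be called through a const TensorImpl (e.g. from
  // Tensor::storage()); it only reads the member.
  const Storage& storage() const { return storage_; }
};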
gchanan added a commit to gchanan/pytorch that referenced this issue Feb 12, 2019
'const Tensor' doesn't provide const safety -- see zdevito/ATen#27 for a description of why.

Therefore, these methods should be non-const, particularly if we change output arguments to be 'const Tensor &'
(to avoid the problem of reassigning the out arguments, which breaks the semantics of out functions).
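To illustrate the reassignment problem, a sketch with hypothetical helper names (in current PyTorch, mutating methods such as copy_ remain callable through const Tensor& precisely because const applies only to the handle):

#include <ATen/ATen.h>

// With a non-const reference, an implementation can accidentally rebind the
// handle instead of writing into the caller's tensor.
void bad_add_out(at::Tensor& out, const at::Tensor& a, const at::Tensor& b) {
  out = a + b;  // rebinds 'out' to a fresh tensor; the caller's storage is untouched
}

// With a const reference, rebinding is rejected, but writing into the
// existing tensor still works.
void good_add_out(const at::Tensor& out, const at::Tensor& a, const at::Tensor& b) {
  // out = a + b;     // error: cannot assign through a const reference
  out.copy_(a + b);   // writes the result into the caller's storage
}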
ezyang added a commit to pytorch/pytorch that referenced this issue Oct 18, 2019
This PR eliminates the static (but not dynamic) distinction between
Tensor and Variable.  Every Variable is a Tensor, no need to static_cast
or call the Variable constructor.  The dynamic distinction will be eliminated
in a later diff.

To do this, I need Tensor to have API parity with Variable.  Thanks
to the efforts of Will Feng and others, most of the hard work has
already been done; I just dump all public methods on Variable into
Tensor.  After doing this, there are a few places where the implementations
migrate:

- Some previously inline implementations only reference TensorImpl.
  These can be placed inline in TensorBody.h
- Some previously inline implementations reference AutogradMeta.
  For the time being, AutogradMeta continues to live in variable.h;
  thus, these implementations must move to be out-of-line,
  in Tensor.cpp
- However, there are also some template methods.  Those methods are
  retained in variable.h
- Some previous implementations are defined in native_functions.yaml.
  In this case, I don't define them explicitly in Tensor; instead
  they are placed in VariableTypeManual.cpp.  Doing this would have
  deleted their documentation; instead, that documentation was moved
  to native_functions.yaml
- All out-of-line implementations that don't fall under the previous
  category get put in Tensor.cpp.
- Private inline methods got turned into non-method helper functions.
  There was only one of these, _create_cpp_hook

I have to add a number of new forward declarations (and sometimes not
forward declarations) to Tensor.h.

One API difference is that all Variable methods are now marked const, so we no
longer have faux const-correctness (see zdevito/ATen#27 for the back story).

I would have preferred to eliminate the dynamic distinction first,
but I wanted inline access to AutogradMeta in Tensor, and the
AutogradMeta struct references Variable (furthermore, I cannot
make it reference Tensor, as we return Variable by mutable reference
from grad() to support the "x.grad() = ..." idiom).

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

ghstack-source-id: 699c091cdce86520c6519f759dec74088ecfb5f2
Pull Request resolved: #28287
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this issue Apr 21, 2021
Summary:
Pull Request resolved: #55351

We incorrectly used `Tensor&` to mean "the underlying
TensorImpl cannot be changed", as explained in
zdevito/ATen#27 (comment) .
This diff gets us on the path to fixing this problem: we have an
incremental way to fix individual native functions so that we can
apply any handwritten fixes a few at a time. It gets the migration
started with the `resize` family of native functions.
ghstack-source-id: 127092677

Test Plan: fitsships

Reviewed By: ezyang

Differential Revision: D27583983

fbshipit-source-id: 4eeeec85f5d268e9d0f1645eb9396914a9f9557f
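A sketch of what the signature migration amounts to for the resize family (simplified, hypothetical declarations; the real ones are generated from native_functions.yaml):

#include <ATen/ATen.h>

// Before: the reference's constness was (mis)used to suggest the TensorImpl
// itself could not change.
at::Tensor& resize_before(at::Tensor& self, at::IntArrayRef size);

// After: const Tensor& still permits resizing, because const applies only to
// the handle, not to the underlying TensorImpl.
const at::Tensor& resize_after(const at::Tensor& self, at::IntArrayRef size);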
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this issue Jun 4, 2021

Summary:
Pull Request resolved: #59268

There's no reason we can't give `convert` this signature: `Tensor::unsafeGetTensorImpl() const` returns a non-const TensorImpl pointer. (See zdevito/ATen#27 (comment))
ghstack-source-id: 130548716

Test Plan: CI

Reviewed By: SS-JIA

Differential Revision: D28811477

fbshipit-source-id: 269f58980c1f68b29d4be3cba4cd340299ce39af
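Sketching the point being made (the surrounding function is hypothetical): unsafeGetTensorImpl() is a const member of Tensor yet returns a mutable TensorImpl*, so a const Tensor& never actually protected the implementation.

#include <ATen/ATen.h>

void poke(const at::Tensor& t) {
  // Callable on a const reference, but the returned pointer is non-const, so
  // any mutating TensorImpl operation is reachable through the const handle.
  at::TensorImpl* impl = t.unsafeGetTensorImpl();
  (void)impl;
}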