Const-correctness #27
Comments
On a separate note, I really don't think things like https://github.com/zdevito/ATen/blob/master/doc/Functions.h#L451 should be const in the output.
Could all methods on Type be const (I haven't checked)? One thing that is more painful than necessary is passing around Types that you know exist (so you don't want the possible null allowed by pointer types). Since Type isn't const-correct, passing by const reference doesn't work well; the next logical thing to try is passing by value, but that doesn't work either because Type is abstract. So you end up passing by non-const reference.
Yes, I think all things on Type can be const.
(Also, because Tensor is actually a shared pointer to the tensor implementation, most things on Tensor can be const too! But it seems disingenuous to mark them so, since 'const Tensor' is assumed to mean 'won't modify this tensor's data'.)
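A minimal sketch of the pain point from the question above, against the ATen API of that era (Type has since been removed from ATen; the factory call is illustrative):

```cpp
#include <ATen/ATen.h>

// Type is abstract (so no pass-by-value) and, at the time, not
// const-correct (so const Type& is unusable); a plain mutable
// reference is the only option left for a Type known to exist.
at::Tensor make_ones(at::Type& type) {
  return type.ones({2, 2});
}
```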
Pointed out to me by @achernya: what makes things tricky is that there is really common confusion about the precedence of const applied to pointers: `const T*` is a pointer to const data, while `T* const` is a const pointer to mutable data. Now, the problem is that Tensor is essentially a wrapper around a `TensorImpl*`, so `const Tensor` behaves like `TensorImpl* const`: the handle is const, but the data it points to is freely mutable. This also explains why the following idea doesn't work: what if we hid the (mutable) TensorImpl pointer as private data, and provided only methods which enforced const-correctness? You could, but you'd be forced to take out the copy constructor. Otherwise, you can write this:
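(The original snippet was lost in the page capture; the following is a hedged reconstruction of the loophole being described. `mul_` is standard ATen API; the function name is illustrative.)

```cpp
#include <ATen/ATen.h>

void mutate_through_const(const at::Tensor& t) {
  at::Tensor copy = t;  // copy constructor happily accepts a const Tensor&
  copy.mul_(2);         // only the pointer was copied: t's data is mutated
}
```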
C++ will happily copy construct a non-const Tensor from a const one, because, hey, it's a copy constructor, it does a semantic copy, right? But Tensor's copy constructor is not a semantic copy: it only copies the pointer, so the new, non-const Tensor aliases exactly the data the const one was supposed to protect.
I'll also add: if we got rid of the copy constructor, then we could close this loophole, but I would assume giving up the ability to copy Tensors freely is too high a price to pay for it.
Yes, the whole point of having Tensor behave like a shared pointer is to keep it cheap and easy to pass around. So I would caution against trying to get const-ness right at the cost of clarity or usability (for instance, adding a ConstTensor wrapper). All methods that modify a tensor are already annotated with a trailing underscore, which is the convention that actually signals mutation. So far I've been accepting commits that improve usability, because they make it possible to do things even when you end up with something 'const Type &' due to C++ libraries or language behavior.
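To illustrate the trailing-underscore convention mentioned above (standard ATen API; the values are arbitrary):

```cpp
#include <ATen/ATen.h>

int main() {
  at::Tensor a = at::ones({2, 2});
  at::Tensor b = a.add(1);  // functional: returns a new tensor, `a` unchanged
  a.add_(1);                // trailing underscore: mutates `a` in place
  return 0;
}
```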
Summary: Pull Request resolved: #12623 Mark Storage functions as const so that they can be exposed outside of TensorImpl when calling storage(). Based on this discussion: zdevito/ATen#27 (comment). Also potentially useful in the effort to remove ShareExternalPointer. Reviewed By: ezyang Differential Revision: D10370201 fbshipit-source-id: 43cf3803a4aa7b94fdf0c3a604d7db769ca0bdd5
'const Tensor' doesn't provide const safety -- see zdevito/ATen#27 for a description of why. Therefore, these methods should be non-const, particularly if we change output arguments to be 'const Tensor &' (to avoid the problem of reassigning the out arguments, which breaks the semantics of out functions).
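A sketch of what the proposed `const Tensor &` out arguments buy, assuming the later API in which Tensor's methods are const (as eventually landed); `fill_` is standard ATen:

```cpp
#include <ATen/ATen.h>

// const only protects the handle, so an out-variant can still write into
// the tensor's data -- but reassigning the handle, which would break the
// semantics of out functions, no longer compiles.
void example_out(const at::Tensor& out) {
  out.fill_(0);             // OK: mutates data through the const handle
  // out = at::zeros({2});  // error: `out` is a const Tensor
}
```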
This PR eliminates the static (but not dynamic) distinction between Tensor and Variable. Every Variable is a Tensor; there is no need to static_cast or call the Variable constructor. The dynamic distinction will be eliminated in a later diff. To do this, I need Tensor to have API parity with Variable. Thanks to the efforts of Will Feng and others, most of the hard work has already been done; I just dump all public methods on Variable into Tensor. After doing this, there are a few places the implementations migrate:

- Some previously inline implementations only reference TensorImpl. These can stay inline in TensorBody.h.
- Some previously inline implementations reference AutogradMeta. For the time being, AutogradMeta continues to live in variable.h; thus, these implementations must move out of line, into Tensor.cpp.
- However, there are also some template methods. Those methods are retained in variable.h.
- Some previous implementations are defined in native_functions.yaml. In this case, I don't define them explicitly in Tensor; instead they are placed in VariableTypeManual.cpp. Doing this would have deleted their documentation, so that documentation was moved to native_functions.yaml.
- All out-of-line implementations that don't fall under the previous categories go in Tensor.cpp.
- Private inline methods got turned into non-method helper functions. There was only one of these, _create_cpp_hook.

I had to add a number of new forward declarations (and sometimes full declarations) to Tensor.h. One API difference is that all Variable methods now have const, so we no longer have faux const-correctness (see zdevito/ATen#27 for the back story). I would have preferred to eliminate the dynamic distinction first, but I wanted inline access to AutogradMeta in Tensor, and the AutogradMeta struct references Variable (furthermore, I cannot make it reference Tensor, as we return Variable by mutable reference from grad() to support the "x.grad() = ..." idiom).

Pull Request resolved: #28287 Signed-off-by: Edward Z. Yang <ezyang@fb.com>
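The grad() idiom the description refers to, as a minimal sketch against the C++ API of that era (grad() then returned a mutable reference; later releases expose this as mutable_grad() instead, so this may not compile as-is on current libtorch):

```cpp
#include <torch/torch.h>

int main() {
  torch::Tensor x = torch::ones({2, 2}, torch::requires_grad());
  x.sum().backward();
  x.grad() = torch::zeros_like(x);  // reassign the stored gradient in place
  return 0;
}
```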
Summary: Pull Request resolved: #55351 We incorrectly used `Tensor&` to mean "the underlying TensorImpl cannot be changed", as explained in zdevito/ATen#27 (comment). This diff gets us on the path to fixing this problem: we have an incremental way to fix individual native functions, so that we can apply any handwritten fixes a few at a time. It gets the migration started with the `resize` family of native functions. ghstack-source-id: 127092677 Test Plan: fitsships Reviewed By: ezyang Differential Revision: D27583983 fbshipit-source-id: 4eeeec85f5d268e9d0f1645eb9396914a9f9557f
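A sketch of the signature change this migration implies for one member of the family (the declarations are an assumption based on the description, not copied from the diff):

```cpp
#include <ATen/ATen.h>

// Before: the non-const reference wrongly implied that the const-ness of
// the handle was what protected the underlying TensorImpl.
// at::Tensor& resize_(at::Tensor& self, at::IntArrayRef size);

// After: const Tensor& is the honest signature -- the handle is const,
// but resize_ still mutates the TensorImpl the handle points to.
const at::Tensor& resize_(const at::Tensor& self, at::IntArrayRef size);
```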
Summary: Pull Request resolved: #59268 There's no reason we can't give `convert` this signature: `convert(const Tensor&)`. `Tensor::unsafeGetTensorImpl() const` returns a non-const TensorImpl pointer. (See zdevito/ATen#27 (comment).) ghstack-source-id: 130548716 Test Plan: CI Reviewed By: SS-JIA Differential Revision: D28811477 fbshipit-source-id: 269f58980c1f68b29d4be3cba4cd340299ce39af
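A minimal sketch of the point being made (unsafeGetTensorImpl() is real ATen API; the setter call is just one example of mutation through the returned pointer):

```cpp
#include <ATen/ATen.h>

// A const member function can still hand out mutable access:
// unsafeGetTensorImpl() is const, yet the TensorImpl* it returns is
// not a pointer-to-const, so callees can mutate the impl anyway.
void poke(const at::Tensor& t) {
  at::TensorImpl* impl = t.unsafeGetTensorImpl();
  impl->set_requires_grad(false);  // mutates state despite const Tensor&
}
```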
ATen presently has no const modifiers, even for methods which are obviously const (e.g. `virtual bool isCuda()` in Type). Make it more const-correct.