`empty_like`, `to`, `resize_as_` and `clone` now preserve memory format #23899
Conversation
This is the first time I've seen
What's the plan for the flag? Is it just for temporary transitioning, and will it be removed for the next release? I don't believe we can actually write code that works correctly for both states of the flag. Also, is there any writeup on the plan for resize / resize_as? The behavior is currently pretty unsafe and crazy, and I don't really understand whether we'll end up in a good state eventually.
We discussed the ability to enable/disable this feature, but not in detail; that is why I want you and @gchanan to see this PR first.
Let me write up a detailed plan, but the high level is:
OK, so it is going to be a "this is how we migrate people to the new behavior" flag. That's fine, but the intent needs to be much more clearly communicated, both in the PR description and where it lives in the code. I'll give some suggestions inline.
Just to be explicit: I don't believe we should do a release with this flag available. It's too burdensome to write code when you have to think about both states of this flag. It's fine while in development, though.
```python
def set_memory_format_propagation(b):
    _C._set_memory_format_propagation(b)

def get_memory_format_propagation():
    return _C._get_memory_format_propagation()
```
I want to bikeshed the API here. I'll give justification for the bikeshedding I suggest.

First, let's move this out of the `torch` top-level namespace, and into a module like `torch.__future__`. Because the intent of this is to NOT be a permanent feature toggle, but rather a way to opt into what will (eventually) be the default behavior, it needs to be extremely clear to end users that this is NOT a supported way to toggle between one mode or another; it is purely a BC knob. In Python, there is an existing convention for this: `from __future__ import absolute_import`, for example. I believe that using a name similar to `__future__` will inform people about the intent appropriately.

Second, let's not have a getter/setter function, but instead have this be a regular variable living in the namespace, e.g., `torch.__future__.propagate_memory_format = True`. This brings this toggle in line with other similar global toggles we have, such as `torch.backends.cudnn.enabled = True`.

I also have some deeper points about the semantics of this flag. More on this shortly.
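As a rough illustration of the module-attribute style being suggested, here is a minimal pure-Python sketch. The module name mirrors the `torch.__future__` proposal and the attribute name is illustrative; neither is an actual torch API.

```python
# Hypothetical sketch of a __future__-style BC knob exposed as a plain
# module attribute (like torch.backends.cudnn.enabled), rather than a
# getter/setter pair. Names here are illustrative, not real torch APIs.
import types

future = types.ModuleType("torch.__future__")
future.propagate_memory_format = False  # BC knob: old behavior by default

# User code opting in to the (eventual) default behavior:
future.propagate_memory_format = True
```

The attribute style makes the opt-in read like configuration rather than an API call, which matches the other global toggles mentioned above.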
```cpp
#include <c10/core/MemoryFormat.h>

namespace c10 {
namespace {
thread_local bool memory_format_propagation_enabled = false;
```
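Since the C++ flag above is `thread_local`, each thread sees its own copy, defaulting to `false`. A small Python sketch (using `threading.local` as a stand-in for the C++ thread-local) shows the consequence: enabling the flag in one thread does not affect others.

```python
# Sketch of the thread-local semantics of the flag above: a value set
# in the main thread is not visible in a worker thread, which still
# observes the default (False). Names are illustrative stand-ins.
import threading

_flag = threading.local()

def get_flag():
    return getattr(_flag, "enabled", False)

def set_flag(b):
    _flag.enabled = b

set_flag(True)            # enable in the main thread only
results = []
t = threading.Thread(target=lambda: results.append(get_flag()))
t.start()
t.join()                  # worker thread saw the default, False
```

This per-thread behavior is part of why reasoning about "both states of the flag" is tricky for library code.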
@gchanan has stated that it is too difficult to write code that works under both settings of this toggle, and thus we should not release with this toggle available to users. I have an alternate proposal, which I have written up at #24261 for ensuring this toggle only affects user code, and not internal code.
```diff
@@ -183,7 +183,7 @@ class CAFFE2_API Tensor {
   }

   at::MemoryFormat suggest_memory_format() const {
-    if (impl_->is_strides_like_channels_last()) {
+    if (get_memory_format_propagation() && impl_->is_strides_like_channels_last()) {
```
So... this is reverting a change that you made previously?
Will likely be solved through a merge conflict later, but just to point out the potential issue that was addressed in my pending PR: https://github.com/pytorch/pytorch/pull/23861/files#diff-4632522f237f1e4e728cb824300403acR188
```diff
@@ -183,7 +183,7 @@ class CAFFE2_API Tensor {
   }

   at::MemoryFormat suggest_memory_format() const {
```
So, what are the new semantics of `suggest_memory_format`? Can we get a docstring explaining what exactly this function does, and when you should call it?
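From the diff above, the semantics being asked about appear to be: return channels-last when the tensor's strides match an NHWC layout and the propagation flag is on, and contiguous otherwise. A simplified pure-Python model of that logic (the real stride check, e.g. handling of size-1 dimensions and other ranks, is richer):

```python
# Simplified model of suggest_memory_format as it reads in the diff:
# channels_last iff the propagation flag is on AND strides look NHWC.
# This is a sketch for discussion, not the actual C++ implementation.

def strides_like_channels_last(sizes, strides):
    # Expects a 4-D (N, C, H, W) tensor; a channels-last layout has
    # strides (H*W*C, 1, W*C, C).
    if len(sizes) != 4:
        return False
    n, c, h, w = sizes
    return tuple(strides) == (h * w * c, 1, w * c, c)

def suggest_memory_format(sizes, strides, propagation_enabled=True):
    if propagation_enabled and strides_like_channels_last(sizes, strides):
        return "channels_last"
    return "contiguous"
```

A docstring along these lines on the C++ method would answer the question raised here.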
…e_empty_supports_memory_format
This PR adds the ability to switch the behavior of the `empty_like`, `to`, `cuda`, `resize_as_` and `clone` operators. If `set_memory_format_propagation` is set to `True`, the output of these operators will be a channels-last tensor if the input tensor is channels last.

It also changes the `suggest_memory_format` function, making it sensitive to the `memory_format_propagation` switch.

CC #23403
No noticeable performance impact.
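To illustrate the switch described in the PR summary, here is a pure-Python model of the flag's effect on an `empty_like`-style operator; the function and flag names mirror the PR but this is not real PyTorch code.

```python
# Pure-Python model of the behavioral switch: with propagation enabled,
# factory-like ops mirror a channels-last input's layout; with it
# disabled, the old behavior (contiguous output) applies. Illustrative
# only -- names mirror the PR, not the actual torch implementation.

_propagation = False

def set_memory_format_propagation(b):
    global _propagation
    _propagation = bool(b)

def empty_like_format(input_format):
    """Return the memory format an empty_like-style op would produce."""
    if _propagation and input_format == "channels_last":
        return "channels_last"
    return "contiguous"
```

Under this model, flipping the flag changes the output layout only for channels-last inputs, which is the migration hazard discussed in the review above.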