[Do not review] [For discussion only] Simulate tensor attrs #103353
Conversation
Initial [ghstack-poisoned]
Initial ghstack-source-id: 9842f962404f13063d9c1e8f322e7dd195339194 Pull Request resolved: #103353
Initial cc penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx ipiszy aakhundov [ghstack-poisoned]
Initial ghstack-source-id: 2fccdcb7f39f395fced8c934944c091a66a44975 Pull Request resolved: #103353 lint
@@ -1132,6 +1133,16 @@ def _clone_input(value):
}
assert "source" in options and options["source"] is not None
kwargs["source"] = options["source"]

if not isinstance(example_value, torch._subclasses.fake_tensor.FakeTensor):
This is a nitty thing, but I want to support running dynamo on programs involving fake tensors, so we shouldn't use this kind of isinstance test to figure out whether we are dealing with an "internal" or "external" tensor.
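As a hedged sketch of the concern: merely checking isinstance(example_value, FakeTensor) cannot distinguish a fake tensor that dynamo created internally from one the user's program passed in under its own FakeTensorMode. Comparing against the fake mode dynamo itself owns would be more robust; the tx.fake_mode attribute below is an assumption for illustration, not necessarily the real field name.

import torch
from torch._subclasses.fake_tensor import FakeTensor

def is_dynamo_internal_fake(example_value, tx):
    # Being a FakeTensor is not enough: the user's program may already run
    # under its own FakeTensorMode, so the value can be "external" even
    # though the isinstance check passes.
    if not isinstance(example_value, FakeTensor):
        return False
    # Sketch of a more robust test: compare against the fake mode this
    # translator owns. `tx.fake_mode` is assumed here for illustration.
    return example_value.fake_mode is getattr(tx, "fake_mode", None)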
@@ -100,6 +101,7 @@ def __init__(
self.is_sparse = is_sparse
self.class_type = class_type
self.specialized_value = specialized_value
self.tensor_dict = tensor_dict
There is nothing tensor-specific about this handling. All objects defined in Python have a __dict__, and we should apply this handling uniformly to all objects.
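A minimal sketch of what uniform handling might look like; GenericObjectVariable and object_dict are hypothetical names chosen for illustration, not existing dynamo classes or fields.

class GenericObjectVariable:
    # Hypothetical sketch: capture an object's __dict__ uniformly rather
    # than special-casing tensors with a tensor_dict field.
    def __init__(self, value, object_dict=None):
        self.value = value
        # Every Python-defined object carries a __dict__, so the same
        # capture works for tensors and non-tensors alike.
        self.object_dict = (
            object_dict
            if object_dict is not None
            else dict(getattr(value, "__dict__", {}))
        )

    def var_getattr(self, name):
        # Instance attributes stored in __dict__ take one code path,
        # regardless of the wrapped object's type.
        if name in self.object_dict:
            return self.object_dict[name]
        raise AttributeError(name)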
@@ -236,6 +238,11 @@ def try_generic_attr_handling():

result = try_generic_attr_handling()

if name in self.tensor_dict.items:
It seems improper to do this if result is not None
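A small standalone illustration of the ordering being asked for; the names mirror the diff above, and everything else is made up for the example.

def lookup_attr(name, tensor_dict, try_generic_attr_handling):
    result = try_generic_attr_handling()
    # Consult the simulated __dict__ only when generic handling found
    # nothing; doing the dict lookup unconditionally would let a __dict__
    # entry silently override whatever the generic path returned.
    if result is None and name in tensor_dict:
        result = tensor_dict[name]
    return result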
self.tensor_dict = self.tensor_dict.call_method(
    tx, "__setitem__", args, kwargs
)
return ConstantVariable(None)
To implement this pedantically correctly, we must consider the interplay between setattr and potential shadowing of class members. For example:
>>> x = torch.randn(2, 2)
>>> x.T
tensor([[ 1.0841, 0.6834],
[-1.3133, -0.1715]])
>>> x.T = "foo"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: attribute 'T' of 'torch._C._TensorBase' objects is not writable
>>> x.sum = 2
>>> x.sum
2
The best way to do this is to look at the default tp_setattr and tp_getattr implementations in CPython and implement them faithfully. It would probably also help to refactor the big if-else statement here into a dispatch table on name.
Otherwise, it will be difficult to review whether or not the implementation has been done correctly, as there will be a lot of edge cases to consider.
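For reference, a rough Python-level approximation of the default CPython behavior being described; this is a simplified sketch of tp_setattro, not the exact C logic, and simulated_setattr is an illustrative name rather than dynamo code.

import torch

def simulated_setattr(obj, name, value):
    # Sketch of object.__setattr__: a data descriptor found on the type
    # (one whose type defines __set__) controls assignment and may refuse
    # it, which is why `x.T = "foo"` raises on the tensor base class.
    for klass in type(obj).__mro__:
        if name in klass.__dict__:
            class_attr = klass.__dict__[name]
            if hasattr(type(class_attr), "__set__"):
                type(class_attr).__set__(class_attr, obj, value)
                return
            break
    # No data descriptor: the value lands in the instance __dict__, which
    # is why `x.sum = 2` silently shadows the bound method.
    obj.__dict__[name] = value

x = torch.randn(2, 2)
simulated_setattr(x, "sum", 2)        # stored in x.__dict__, shadows the method
print(x.sum)                          # 2
# simulated_setattr(x, "T", "foo")    # raises: attribute 'T' is not writable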
Stack from ghstack (oldest at bottom):
Initial
Initial
cc @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @aakhundov