Conversation

@kshitij12345 (Collaborator) commented May 27, 2022

As per title

TODO:

  • Add error inputs (already exist)

@facebook-github-bot (Contributor) commented May 27, 2022

❌ 1 new failure

As of commit 564a0b9 (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-clang7-onnx / test (default, 1, 2, linux.2xlarge) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-06T16:05:52.1151991Z [gw2] PASSED test/onnx/test_pytorch_onnx_onnxruntime.py

2022-06-06T16:05:52.1157792Z ―――――――――――――― TestUtilityFuns_opset15.test_custom_op_fallthrough ――――――――――――――
2022-06-06T16:05:52.1158778Z [gw1] linux -- Python 3.7.13 /opt/conda/bin/python
2022-06-06T16:05:52.1159711Z Traceback (most recent call last):
2022-06-06T16:05:52.1160367Z   File "/opt/conda/lib/python3.7/site-packages/torch/_ops.py", line 198, in __getattr__
2022-06-06T16:05:52.1160981Z     op, overload_names = torch._C._jit_get_operation(qualified_op_name)
2022-06-06T16:05:52.1161483Z RuntimeError: No such operator custom_namespace::custom_op
2022-06-06T16:05:52.1161795Z 
2022-06-06T16:05:52.1162059Z The above exception was the direct cause of the following exception:
2022-06-06T16:05:52.1162376Z 
2022-06-06T16:05:52.1283235Z Traceback (most recent call last):
2022-06-06T16:05:52.1285978Z   File "/var/lib/jenkins/workspace/test/onnx/test_utility_funs.py", line 1041, in test_custom_op_fallthrough
2022-06-06T16:05:52.1286470Z     dynamic_axes={"x": [0, 1, 2], "y": [0, 1, 2]},
2022-06-06T16:05:52.1287015Z   File "/var/lib/jenkins/workspace/test/onnx/test_utility_funs.py", line 68, in _model_to_graph
2022-06-06T16:05:52.1318358Z     dynamic_axes=dynamic_axes,
2022-06-06T16:05:52.1319090Z   File "/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py", line 724, in _model_to_graph
2022-06-06T16:05:52.1319819Z     graph, params, torch_out, module = _create_jit_graph(model, args)


Args:
input (Tensor): tensor to split.
indices_or_sections (Tensor, int or list or tuple of ints): See argument in :func:`torch.tensor_split`.

Collaborator Author:

Does not accept tensor

- func: hsplit.int(Tensor(a -> *) self, int sections) -> Tensor(a)[]
variants: function, method
- func: hsplit.array(Tensor(a -> *) self, int[] indices) -> Tensor(a)[]
variants: function, method
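The two overloads above only accept an int or an int[] for the split argument, which is the point of the "Does not accept tensor" comment. As a rough illustration of the semantics they imply, here is a hypothetical pure-Python sketch (helper name `split_points` is invented, not PyTorch API): an int means "this many equal sections", a list of ints means "cut before these indices", tensor_split-style.

```python
# Hypothetical sketch of the split-boundary semantics behind the two
# hsplit overloads: int -> equal sections, list of ints -> explicit cuts.
from typing import List, Sequence, Union


def split_points(dim_size: int, indices_or_sections: Union[int, Sequence[int]]) -> List[int]:
    """Return the boundary indices (endpoints included) at which a
    dimension of length dim_size would be cut."""
    if isinstance(indices_or_sections, int):
        sections = indices_or_sections
        if sections == 0 or dim_size % sections != 0:
            raise RuntimeError(
                f"dimension of size {dim_size} is not divisible by {sections}"
            )
        step = dim_size // sections
        return list(range(0, dim_size + 1, step))
    # list/tuple of ints: explicit cut positions
    return [0, *indices_or_sections, dim_size]


print(split_points(8, 2))       # [0, 4, 8]
print(split_points(8, [3, 6]))  # [0, 3, 6, 8]
```

Note this is only a model of the boundary arithmetic; the real refs delegate the actual slicing to `tensor_split`.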

Collaborator:

Nice catch

@kshitij12345 marked this pull request as ready for review May 27, 2022 15:00
@ejguan added the 'triaged' label May 31, 2022
dim = 0 if a.ndim == 1 else 1
if isinstance(indices_or_sections, int):
split_size = indices_or_sections
if not (split_size != 0 and a.shape[dim] % split_size == 0):

Collaborator:

double negation here is a little hard to read -- maybe propagate the not?
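Propagating the `not` inward (De Morgan) turns the condition under discussion into a direct statement of the error case. A minimal sketch of the suggested rewrite (the `needs_error` name is hypothetical):

```python
# Original:  if not (split_size != 0 and a.shape[dim] % split_size == 0):
# Rewritten: if split_size == 0 or a.shape[dim] % split_size != 0:

def needs_error(dim_size: int, split_size: int) -> bool:
    # True exactly when the split is invalid
    return split_size == 0 or dim_size % split_size != 0


assert needs_error(5, 0)      # zero sections always errors
assert needs_error(5, 2)      # 5 is not divisible by 2
assert not needs_error(6, 2)  # clean split
```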

Collaborator:

is this error message redundant with error checking done by tensor_split?

Collaborator Author:

I think this msg is better than the one that tensor_split throws:

tensor_split: "number of sections must be greater than 0, but was 0"
vs.
torch.hsplit: "torch.hsplit attempted to split along dimension 1, but the size of the dimension 5 is not divisible by the split_size 0!"

Collaborator:

OK

+ str(split_size)
+ "!"
)
raise RuntimeError(msg)

Collaborator:

msg is here so that's cool but same questions as for hsplit

@mruberry (Collaborator) left a comment:

Overall looks good, @kshitij12345! A couple inline comments for your review

"hsplit(): received an invalid combination of arguments. "
"Expected indices_or_sections to be of type int, list of ints or tuple of ints "
f"but got type {type(indices_or_sections)}"
)

Collaborator Author:

Would be nice if check could also take the type of error to raise. This would change to
check(isinstance(indices_or_sections, (list, tuple)), msg, error_type=TypeError)?

Collaborator:

cc @ezyang -- we're already seeing variants of check pop up for different error types

Contributor:

Optional third argument sounds good to me
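A minimal sketch of what the agreed-on optional third argument might look like. The `check` signature here is an assumption for illustration, not the actual helper in `torch._prims_common`; it also takes the message as a callable, matching the lazy-construction point raised later in this thread.

```python
# Hypothetical check() with an optional error_type third argument.
from typing import Callable, Type


def check(
    condition: bool,
    make_msg: Callable[[], str],
    error_type: Type[Exception] = RuntimeError,
) -> None:
    """Raise error_type with a lazily built message when condition is False."""
    if not condition:
        raise error_type(make_msg())


# Usage mirroring the review suggestion:
try:
    check(
        isinstance("not a list", (list, tuple)),
        lambda: "Expected indices_or_sections to be a list or tuple of ints",
        error_type=TypeError,
    )
except TypeError as e:
    print(type(e).__name__, e)
```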

dim = 0 if a.ndim == 1 else 1
if isinstance(indices_or_sections, int):
split_size = indices_or_sections
msg = (

Collaborator:

Doing final checks -- what I don't like about this pattern is that we're constructing the message even when we don't error, cc @ezyang -- I think we have to check for the condition to construct the message only in the error state, then if we want to use check just check(False, ...

Collaborator:

(the other thing to do is we could define a function that returns the string, which would also delay string construction and which check does accept. I'm not sure how costly defining a function is, however -- I'd prefer we hide as much as possible behind the error checks

Contributor:

This is why check takes a lambda; check(b, lambda: string_construction). Taking a string directly is wrong and won't actually work. I checked and dynamo is able to trace past the lambda construction.
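The laziness argument can be demonstrated directly: with a lambda, the message body is never evaluated on the success path. A small self-contained sketch (the one-line `check` here is a stand-in for the real helper):

```python
# Demonstrate that check(cond, lambda: msg) defers message construction
# until the condition actually fails.
calls = {"n": 0}


def expensive_msg() -> str:
    calls["n"] += 1  # count how many times the message is built
    return "something went wrong"


def check(cond, make_msg):
    if not cond:
        raise RuntimeError(make_msg())


check(True, lambda: expensive_msg())   # passes: message never built
assert calls["n"] == 0

try:
    check(False, lambda: expensive_msg())
except RuntimeError:
    pass
assert calls["n"] == 1                 # built exactly once, on failure
```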

def hsplit(
a: TensorLikeType, indices_or_sections: DimsType
) -> Tuple[TensorLikeType, ...]:
msg = (

Collaborator:

Same issue here about guarding message construction on the error

@kshitij12345 (Collaborator Author):

Gentle Ping @mruberry

return tuple(splits)


def hsplit(

Collaborator:

Nit: I feel like a common helper could have implemented these but nbd
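The three refs differ only in the minimum ndim they require and the dimension they split along, so the common helper the reviewer has in mind could plausibly be a small factory. A hypothetical sketch (names and the `(dim, arg)` return value are invented for illustration; the real refs delegate the slicing to `tensor_split`):

```python
# Hypothetical factory generating hsplit/vsplit/dsplit-style validation:
# each variant is just (required min ndim, rule for picking the split dim).
def _make_split(name: str, min_ndim: int, pick_dim):
    def split(ndim: int, indices_or_sections):
        if ndim < min_ndim:
            raise RuntimeError(
                f"torch.{name} requires a tensor with at least {min_ndim} "
                f"dimensions, but got one with {ndim}"
            )
        # Return the chosen dim instead of slicing, to keep the sketch small.
        return pick_dim(ndim), indices_or_sections
    return split


# hsplit: dim 0 for 1-D inputs, else dim 1; vsplit: dim 0; dsplit: dim 2.
hsplit = _make_split("hsplit", 1, lambda ndim: 0 if ndim == 1 else 1)
vsplit = _make_split("vsplit", 2, lambda ndim: 0)
dsplit = _make_split("dsplit", 3, lambda ndim: 2)

print(hsplit(1, 2))  # (0, 2)
print(hsplit(3, 2))  # (1, 2)
```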

@mruberry (Collaborator) left a comment:

Cool!

@mruberry (Collaborator) commented Jun 6, 2022

This just needs a rebase to resolve the merge conflict, @kshitij12345

@kshitij12345 (Collaborator Author):

ONNX failure looks unrelated to the PR. See #78844.

@pytorchbot merge

@pytorchmergebot (Collaborator):

@pytorchbot successfully started a merge job. Check the current status here

@github-actions bot (Contributor) commented Jun 6, 2022

Hey @kshitij12345.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Jun 7, 2022
Summary:
As per title

TODO:
* [x] Add error inputs (already exist)

Pull Request resolved: #78418
Approved by: https://github.com/mruberry

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/c461d8a9777ad54ab5a5587d8c67bf2d19bca894

Reviewed By: osalpekar

Differential Revision: D36959164

Pulled By: osalpekar

fbshipit-source-id: 6f0a6258742777a583cdc82d4e62b3eb1314d5bf
@mruberry added the 'topic: not user facing' label Jun 7, 2022