
Conversation

@jjsjann123
Collaborator

  1. Added references for the two ops.
  2. Inherited the original operators' OpInfo tests.

TODO for a future PR:
Add primTorch references for `dsplit` and `dstack`; those two should use `atleast_3d`, which currently lives in a different module.
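The TODO above notes that `dsplit` and `dstack` references should be built on top of `atleast_3d`. A minimal sketch of that idea, using NumPy purely as a stand-in (np.dstack and np.atleast_3d share semantics with the torch ops; this is not the actual primTorch reference):

```python
import numpy as np

def dstack_sketch(arrays):
    """Hypothetical sketch: dstack expressed via atleast_3d.

    NumPy stands in for torch here; the real reference would use
    the torch/_refs equivalents once atleast_3d is available.
    """
    # Promote every input to at least 3 dimensions, then
    # concatenate along axis 2 (the "depth" axis).
    aligned = tuple(np.atleast_3d(a) for a in arrays)
    return np.concatenate(aligned, axis=2)
```

For example, stacking two 1-D arrays of length 3 yields a result of shape (1, 3, 2), matching `np.dstack`.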

@facebook-github-bot
Contributor

facebook-github-bot commented May 27, 2022

🔗 Helpful links

✅ No Failures (0 Pending)

As of commit 9864de3 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@ejguan added the labels triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) and module: primTorch on May 31, 2022
    return list(_maybe_broadcast(*tensors, preserve_cpu_scalar_tensors=False))


def broadcast_to(a: TensorLikeType, size: ShapeType) -> TensorLikeType:
Review comment on `broadcast_to`: Nice
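For context, the `broadcast_to` reference under review shares its semantics with NumPy's `np.broadcast_to`; a minimal illustration (NumPy is only a stand-in here, not the code under review):

```python
import numpy as np

# Illustration only: np.broadcast_to mirrors the broadcast_to
# semantics -- the input is expanded to the requested shape without
# copying data, with missing leading dimensions stretched.
a = np.arange(3)                 # shape (3,)
b = np.broadcast_to(a, (4, 3))   # each row is a view of `a`
```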

@out_wrapper
def column_stack(tensors: TensorSequenceType) -> TensorLikeType:
    aligned_tensors = [
        x if x.ndim > 1 else torch._prims.expand_dims(x, list(range(x.ndim, 2)))
        for x in tensors
    ]
    return prims.cat(aligned_tensors, 1)

Review comment on the `expand_dims` line: Just prims. is fine (as with the broadcast_to impl).

Review comment on the `prims.cat` line: Prefer using the reference to the prim (so just cat(...)).

@jjsjann123 (Author) replied: Any reason we prefer to use references inside _refs instead of prims? Using prims in references looks like a cleaner implementation.
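Putting the inline feedback together (a tuple instead of a list, and a single concatenation along dim 1), a sketch of the same logic with NumPy standing in for torch (np.column_stack matches torch.column_stack semantics; this is not the merged code):

```python
import numpy as np

def column_stack_sketch(tensors):
    """Hypothetical sketch of the column_stack reference logic.

    NumPy stands in for torch; the real reference uses
    expand_dims and cat from the prims/refs layer.
    """
    # Inputs with fewer than 2 dims become column vectors (n, 1),
    # mirroring what expand_dims(x, range(x.ndim, 2)) achieves.
    aligned = tuple(
        x if x.ndim > 1 else x.reshape(-1, 1)
        for x in tensors
    )
    # A single concatenation along dim 1 stitches the columns together.
    return np.concatenate(aligned, axis=1)
```

Two length-3 vectors stack into a (3, 2) result, the same as `np.column_stack`.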


Review comment on the `aligned_tensors = [` line: Style nit: tuples are better than lists when not modifying the container.

@mruberry (Collaborator) left a comment:
Nice work @jjsjann123! See a few inline notes, then merge this when you like using pytorchbot

@jjsjann123
Collaborator Author

@pytorchbot merge this

@github-actions
Contributor

github-actions bot commented Jun 1, 2022

Hey @jjsjann123.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Jun 2, 2022
Summary:
1. Added references for the two ops;
2. Inherited original operators' OpInfo tests;

TODO for a future PR:
Add primTorch references for `dsplit` and `dstack`; those two should use `atleast_3d`, which currently lives in a different module.

Pull Request resolved: #78416
Approved by: https://github.com/mruberry

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/7ea9c6edc2f0dd2187db5f42359df2ddc7b503fe

Reviewed By: seemethere

Differential Revision: D36815590

Pulled By: seemethere

fbshipit-source-id: 57feb0b546e198b4675c346d15be7c7cfe287cc7

Labels

cla signed · Merged · module: primTorch · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


6 participants