
Provide a few default args for numpy translation #20451

Closed

Conversation

umanwizard (Contributor) commented:

Add automatic translations for a few argument names that commonly differ between PyTorch and NumPy.

For now, they are as follows:

  • `keepdim` -> `keepdims`
  • `dim` -> `axis`
  • `input` -> (any of `a`, `x`, `x1`)
  • `other` -> `x2`

Basic examples:

```python
>>> t=torch.randn(10,10)
>>> torch.sum(x=t, axis=1)
tensor([ 0.5199, -0.3768,  4.3619, -0.9105,  1.1804,  1.0837, -0.9036,  0.2365,
         1.1171, -0.0999])
>>> torch.add(x1=5, x2=6)
tensor(11)
```

The additional overhead is zero when using traditional PyTorch argument names, and a few (usually 1) extra PyDict lookups when using NumPy argument names.
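
To make that cost model concrete, here is a minimal pure-Python sketch of the lookup order; the real translation happens in PyTorch's C++ argument parsing (see the map quoted in the review below), and the names `NUMPY_ALIASES` and `find_arg` are invented for this illustration only:

```python
# Illustrative sketch only -- the real logic lives in the C++ argument parser.
# The map mirrors the aliases this PR adds.
NUMPY_ALIASES = {
    "dim": ["axis"],
    "keepdim": ["keepdims"],
    "input": ["x", "a", "x1"],
    "other": ["x2"],
}

def find_arg(kwargs, torch_name):
    """Resolve a keyword argument, preferring the PyTorch name."""
    # Traditional PyTorch name first: no extra work in this case.
    if torch_name in kwargs:
        return kwargs[torch_name]
    # Fall back to the NumPy-style aliases (usually a single extra dict lookup).
    for alias in NUMPY_ALIASES.get(torch_name, []):
        if alias in kwargs:
            return kwargs[alias]
    raise TypeError(f"missing argument: {torch_name}")

# find_arg({"axis": 1}, "dim") -> 1
```

Under these aliases, `torch.sum(t, axis=1, keepdims=True)` resolves the same way as `torch.sum(t, dim=1, keepdim=True)`.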

@pytorchbot added the `module: internals`, `module: operators`, and `module: pybind` labels on May 13, 2019
@umanwizard added the `module: numpy` label on May 13, 2019
@umanwizard (Contributor, Author) commented:

Related to #2228

@colesbury (Member) left a comment:

LGTM

As I wrote inline, I'm ambivalent about the aliases for "input" and "other", but I'm not opposed to them.

```cpp
static const std::unordered_map<std::string, std::vector<std::string>> numpy_compatibility_arg_names = {
    {"dim", {"axis"}},
    {"keepdim", {"keepdims"}},
    {"input", {"x", "a", "x1"}},
```
@colesbury (Member) commented inline:

I'm a bit ambivalent about the input and other aliases. I'm not sure people use them as kwargs often enough to make it worthwhile. There's also a long tail of names that NumPy uses instead of "input", including "A", "arr", "ary", "m", "a1", "array".

@umanwizard (Contributor, Author) replied:

> I'm a bit ambivalent...

I don't know how often people use them as kwargs. But I want to be as complete as possible.

> There's also a long tail...

Yes, and I've also seen `p`, and who knows what else there is. I think we need to decide case-by-case, depending on how common they are, whether to include them in the default common args here, or annotate individual functions with alternate arg names (I will introduce a facility for doing that in a future PR).
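
Purely as an illustration of what per-function annotation might look like (the actual facility was left to a later PR; the decorator name and `some_op` below are invented), a Python-level sketch:

```python
import functools

def numpy_arg_aliases(**alias_to_torch_name):
    """Hypothetical decorator: maps NumPy-style kwarg names onto a single
    function's PyTorch parameter names. Illustrative sketch only."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            renamed = {alias_to_torch_name.get(name, name): value
                       for name, value in kwargs.items()}
            return fn(*args, **renamed)
        return wrapper
    return decorate

# Hypothetical op whose NumPy counterpart names its first argument "arr":
@numpy_arg_aliases(arr="input")
def some_op(input, dim=0):
    return (input, dim)

# some_op(arr=[1, 2, 3]) -> ([1, 2, 3], 0)
```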

@facebook-github-bot (Contributor) left a comment:

@umanwizard has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented:

@umanwizard merged this pull request in 72bb84c.

VitalyFedyunin pushed a commit to VitalyFedyunin/pytorch that referenced this pull request May 15, 2019
Summary:
Add automatic translations for a few argument names that commonly differ between PyTorch and NumPy.

For now, they are as follows:

* `keepdim` -> `keepdims`
* `dim` -> `axis`
* `input` -> (any of `a`, `x`, `x1`)
* `other` -> `x2`

Basic examples:
```python
>>> t=torch.randn(10,10)
>>> torch.sum(x=t, axis=1)
tensor([ 0.5199, -0.3768,  4.3619, -0.9105,  1.1804,  1.0837, -0.9036,  0.2365,
         1.1171, -0.0999])
```
```python
>>> torch.add(x1=5, x2=6)
tensor(11)
```

The additional overhead is zero when using traditional PyTorch argument names, and a few (usually 1) extra PyDict lookups when using NumPy argument names.
Pull Request resolved: pytorch#20451

Differential Revision: D15337521

Pulled By: umanwizard

fbshipit-source-id: 7a7d389786f4ccf5c86a14ecb2002c61730c51b5