Added torch.linalg.matrix_power #52608


Closed

Conversation

@heitorschueroff (Contributor) commented on Feb 22, 2021

Stack from ghstack:

TODO

  • Add OpInfo
  • Update documentation
  • Add more tests and compare against NumPy

Differential Revision: D27261532
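
A minimal usage sketch of the new op, including the NumPy cross-check mentioned in the TODO list (the shapes, dtype, and exponent here are illustrative only, not taken from the PR's tests):

```python
import numpy as np
import torch

# Square (or batched ..., n, n) input; the exponent is an integer and may be
# zero (identity) or negative (power of the inverse, for invertible input).
A = torch.randn(2, 3, 3, dtype=torch.float64)

result = torch.linalg.matrix_power(A, 3)

# Cross-check against NumPy one matrix at a time; looping keeps the
# comparison independent of NumPy's stacked-matrix support.
for a, r in zip(A, result):
    np.testing.assert_allclose(r.numpy(), np.linalg.matrix_power(a.numpy(), 3))
```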

heitorschueroff added a commit that referenced this pull request Feb 22, 2021
@facebook-github-bot (Contributor) commented on Feb 22, 2021

💊 CI failures summary and remediations

As of commit 8a24747 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-scanned failure(s)

This comment was automatically generated by Dr. CI.

…rix_power"


**TODO**

- [x] Add OpInfo
- [x] Update documentation
- [ ] Add more tests and compare against NumPy
- [ ] Benchmark against NumPy

[ghstack-poisoned]
…rix_power"


**TODO**

- [x] Add OpInfo
- [x] Update documentation
- [x] Add more tests and compare against NumPy
- [ ] Benchmark against NumPy

[ghstack-poisoned]
heitorschueroff added a commit that referenced this pull request Feb 24, 2021
@heitorschueroff heitorschueroff marked this pull request as ready for review February 24, 2021 22:19
@heitorschueroff changed the title from "Deprecated torch.matrix_power in favor of torch.linalg.matrix_power" to "Added torch.linalg.matrix_power" on Feb 24, 2021
heitorschueroff added a commit that referenced this pull request Feb 25, 2021
@mruberry (Collaborator) left a comment

This looks good to me except for the out= behavior, which would be nice to address (perhaps by providing the helper function for safe copying we've discussed before).

The ideep/tensorpipe changes should be reverted and the PR rebased past the mypy failures in its base.

The plan is for a follow-up PR to alias torch.matrix_power to torch.linalg.matrix_power, right?

@IvanYashchuk would you like to take a look?
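
For reference on the out= and alias points, a hedged sketch of the expected end state (only the out= keyword of torch.linalg.matrix_power and the existing torch.matrix_power are assumed; the safe-copy helper itself is not shown):

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)

# Functional form.
B = torch.linalg.matrix_power(A, 3)

# out= form: the expectation under review is that writing into `out`
# follows the usual safe-copy semantics and matches the functional result.
out = torch.empty_like(A)
torch.linalg.matrix_power(A, 3, out=out)
assert torch.allclose(out, B)

# After the planned follow-up PR, torch.matrix_power becomes an alias, so
# the two spellings should agree.
assert torch.allclose(torch.matrix_power(A, 3), B)
```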

@IvanYashchuk IvanYashchuk self-requested a review March 1, 2021 15:49
@IvanYashchuk (Collaborator) left a comment

Looks good to me!

@heitorschueroff could you also post the runtime for the OpInfo checks? (--durations=0 option for pytest)
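
For reference, a sketch of one way such a report can be generated (the -k pattern is an assumption chosen to match the test names shown in the reply below):

```python
# Select the linalg.matrix_power OpInfo tests with -k and disable pytest's
# default "slowest N" cutoff with --durations=0 to list every duration.
import pytest

pytest.main([
    "test/test_ops.py",
    "-k", "linalg_matrix_power",
    "--durations=0",
])
```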

heitorschueroff added a commit that referenced this pull request Mar 8, 2021
@heitorschueroff (Contributor, Author) commented:

> Looks good to me!
>
> @heitorschueroff could you also post the runtime for the OpInfo checks? (--durations=0 option for pytest)

======================================================================================================= slowest durations =======================================================================================================
11.75s call     test/test_ops.py::TestGradientsCUDA::test_fn_gradgrad_linalg_matrix_power_cuda_complex128
5.45s call     test/test_ops.py::TestGradientsCPU::test_fn_gradgrad_linalg_matrix_power_cpu_complex128
3.07s call     test/test_ops.py::TestGradientsCUDA::test_fn_gradgrad_linalg_matrix_power_cuda_float64
2.65s call     test/test_ops.py::TestOpInfoCUDA::test_supported_dtypes_linalg_matrix_power_cuda_complex128
1.73s call     test/test_ops.py::TestGradientsCUDA::test_fn_grad_linalg_matrix_power_cuda_complex128
1.65s call     test/test_ops.py::TestGradientsCPU::test_fn_gradgrad_linalg_matrix_power_cpu_float64
1.32s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_jit_linalg_matrix_power_cuda_complex128
1.32s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_jit_linalg_matrix_power_cuda_complex64
1.20s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_jit_linalg_matrix_power_cuda_float32
1.19s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_jit_linalg_matrix_power_cuda_float64
1.04s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_jit_linalg_matrix_power_cpu_complex128
1.00s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_jit_linalg_matrix_power_cpu_complex64
0.95s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_jit_linalg_matrix_power_cpu_float32
0.93s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_jit_linalg_matrix_power_cpu_float64
0.65s call     test/test_ops.py::TestGradientsCPU::test_fn_grad_linalg_matrix_power_cpu_complex128
0.56s call     test/test_ops.py::TestGradientsCUDA::test_fn_grad_linalg_matrix_power_cuda_float64
0.22s call     test/test_ops.py::TestGradientsCPU::test_fn_grad_linalg_matrix_power_cpu_float64
0.13s call     test/test_ops.py::TestOpInfoCPU::test_supported_dtypes_linalg_matrix_power_cpu_complex128
0.11s call     test/test_ops.py::TestCommonCUDA::test_out_linalg_matrix_power_cuda_float32
0.11s call     test/test_ops.py::TestOpInfoCUDA::test_supported_dtypes_linalg_matrix_power_cuda_complex64
0.10s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_matrix_power_cuda_complex64
0.10s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_matrix_power_cuda_complex128
0.10s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_matrix_power_cuda_float32
0.09s call     test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_matrix_power_cuda_float64
0.09s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_bool
0.09s call     test/test_ops.py::TestOpInfoCUDA::test_supported_dtypes_linalg_matrix_power_cuda_float64
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_supported_dtypes_linalg_matrix_power_cuda_float32
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_bfloat16
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_int16
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_uint8
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_int8
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_float16
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_int64
0.08s call     test/test_ops.py::TestOpInfoCUDA::test_unsupported_dtypes_linalg_matrix_power_cuda_int32
0.05s call     test/test_ops.py::TestGradientsCUDA::test_inplace_grad_linalg_matrix_power_cuda_complex128
0.04s call     test/test_ops.py::TestGradientsCUDA::test_inplace_gradgrad_linalg_matrix_power_cuda_complex128
0.04s call     test/test_ops.py::TestGradientsCUDA::test_inplace_gradgrad_linalg_matrix_power_cuda_float64
0.04s call     test/test_ops.py::TestGradientsCUDA::test_inplace_grad_linalg_matrix_power_cuda_float64
0.01s call     test/test_ops.py::TestCommonCPU::test_out_linalg_matrix_power_cpu_float32
0.01s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_linalg_matrix_power_cpu_complex64
0.01s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_linalg_matrix_power_cpu_complex128
0.01s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_linalg_matrix_power_cpu_float64
0.01s call     test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_linalg_matrix_power_cpu_float32
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_bool
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_int16
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_int64
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_int32
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_int8
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_uint8
0.01s call     test/test_ops.py::TestOpInfoCPU::test_unsupported_dtypes_linalg_matrix_power_cpu_bfloat16

(124 durations < 0.005s hidden.  Use -vv to show these durations.)
========================================================================================

@mruberry (Collaborator) left a comment

Nice work, @heitorschueroff!

I think the test failures are erroneous. You may want to rebase to verify the docs build failure isn't related to this PR.

@mruberry (Collaborator) left a comment

This accidentally picked up some fbgemm and tensorpipe changes that should be reverted.

**TODO**

- [x] Add OpInfo
- [x] Update documentation
- [x] Add more tests and compare against NumPy

[ghstack-poisoned]
**TODO**

- [x] Add OpInfo
- [x] Update documentation
- [x] Add more tests and compare against NumPy

Differential Revision: [D27260630](https://our.internmc.facebook.com/intern/diff/D27260630)

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Mar 23, 2021
…53538)

Summary:
Pull Request resolved: #53538

* #52608 Added torch.linalg.matrix_power

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D27261531

Pulled By: heitorschueroff

fbshipit-source-id: 5a944b390f3cc6896c2aa92ba467319ddc9309e4
@facebook-github-bot (Contributor) commented:

@heitorschueroff merged this pull request in f9e7f13.

@facebook-github-bot facebook-github-bot deleted the gh/heitorschueroff/52/head branch March 27, 2021 14:16