
Fix trace op #340

Merged
merged 1 commit into master on Feb 17, 2023

Conversation

Ronian526
Collaborator

  • give warnings when converting int64 for reduction ops
  • use a cast tensor for the reduction sum in trace
  • unblock trace from running

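The approach the bullets describe can be sketched in pure Python. This is an illustrative stand-in, not the actual MPS kernel code: trace is computed as a reduction sum over the matrix diagonal, and int64 inputs are cast down (with a warning) before the reduction, mirroring the fix. `SUPPORTED_REDUCTION_DTYPE`, `trace_with_cast`, and the warning text are hypothetical names for illustration.

```python
import warnings

# Assumed stand-in for the dtype the MPS backend casts int64 down to
# before running a reduction op.
SUPPORTED_REDUCTION_DTYPE = "int32"

def trace_with_cast(matrix, dtype="int64"):
    """Compute the trace of a matrix (list of lists) as a reduction sum
    over the main diagonal, warning when an int64 input must be cast."""
    if dtype == "int64":
        warnings.warn(
            "MPS: casting int64 to int32 for reduction op 'trace'; "
            "values outside the int32 range may overflow"
        )
        dtype = SUPPORTED_REDUCTION_DTYPE
    # Trace is defined over the main diagonal up to the smaller dimension.
    n = min(len(matrix), len(matrix[0]))
    return sum(matrix[i][i] for i in range(n))

print(trace_with_cast([[1, 2], [3, 4]]))  # 1 + 4 = 5
```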
@kulinseth merged commit 6ace5f9 into master on Feb 17, 2023
@DenisVieriu97 added the labels Upstreamed (change has been upstreamed to PyTorch master) and DV: In Progress on Feb 21, 2023
kulinseth pushed a commit that referenced this pull request Feb 22, 2023
- give warnings of converting int64 for reduction ops
- use cast tensor for reduction sum on trace
- unblock trace from running
kulinseth added a commit that referenced this pull request Feb 28, 2023
Remove torch._six from test_mps (#326)

Fix test_zero_grad() (#330)

Fix bilinear backward pass (#331)

* Fix bilinear backward pass

* Remove comment

Update macOS 12 blocklist (#323)

* Update macOS 12 blocklist
- move sum, masked.var, mul to low precision list
- unblock them from running

* - mark __rdiv__ failures as accumulated error exceeding atol/rtol

Fix nn.functional.embedding grad (#335)

- cast the input tensor to float32 and cast the output tensor back
- unblock the test

Fix prelu backward (#334)

Reduction cast f16 to f32 only on macOS 12 (#332)

- unblock rdiv float16

Fix trace op (#340)

- give warnings of converting int64 for reduction ops
- use cast tensor for reduction sum on trace
- unblock trace from running

Update random result list (#339)

* - move nn.functional.feature_alpha_dropout (with_train), normal (number_mean), and new_empty_strided to expected failures

* - update new_empty_strided

---------

Co-authored-by: Kulin Seth <kulin_seth@apple.com>

Enable int8 in TestConsistency (#347)

Dev/skotapati/copy broadcasting (#350)

* Handle broadcasting by expanding src tensor in Copy.mm

* Unblock linalg_matrix_power

* Improved formatting
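The Copy.mm change above expands the src tensor to the dst shape before copying. A minimal sketch of the right-aligned broadcasting rule involved, in pure Python with a hypothetical `broadcast_shape` helper (not the actual Copy.mm logic):

```python
def broadcast_shape(src, dst):
    """Return the shape src would be expanded to when copied into dst,
    following right-aligned broadcasting rules: each src dim must be 1
    or equal to the corresponding dst dim."""
    # Right-align the shorter shape by padding with 1s on the left.
    src = (1,) * (len(dst) - len(src)) + tuple(src)
    out = []
    for s, d in zip(src, dst):
        if s != d and s != 1:
            raise ValueError(f"cannot broadcast dim {s} to {d}")
        out.append(d)
    return tuple(out)

print(broadcast_shape((3,), (2, 3)))       # (2, 3)
print(broadcast_shape((1, 3), (4, 1, 3)))  # (4, 1, 3)
```

This assumes dst has at least as many dimensions as src, which holds when expanding a source tensor into a destination during a copy.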

Add the functionality to dump MPS ops.

1. Add DUMP_MPS_OPS to use LoggingTensor to dump out the ATen ops.
2. Skip running the EXPECTTEST list, as some tests are still
   seg-faulting
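The idea behind the DUMP_MPS_OPS switch can be sketched in pure Python. This is an illustrative stand-in, not the actual test_mps code: when the environment variable is set, each intercepted op name is recorded, similar in spirit to wrapping tensors in LoggingTensor to dump the ATen ops a test hits. `maybe_dump` and `DUMPED_OPS` are hypothetical names.

```python
import os

# Ops recorded while the dump switch is enabled.
DUMPED_OPS = []

def maybe_dump(op_name):
    """Record an op name only when DUMP_MPS_OPS is set in the environment."""
    if os.environ.get("DUMP_MPS_OPS"):
        DUMPED_OPS.append(op_name)

os.environ["DUMP_MPS_OPS"] = "1"
maybe_dump("aten::trace")
maybe_dump("aten::sum")
print(DUMPED_OPS)  # ['aten::trace', 'aten::sum']
```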

Fix lintrunner errors (#353)

* Fix lintrunner errors

* - move normal_in_place to random result list

Fixed and updated test_mps.
DenisVieriu97 pushed a commit that referenced this pull request Feb 28, 2023
skotapati pushed a commit that referenced this pull request Apr 7, 2023
jhavukainen pushed a commit that referenced this pull request Mar 15, 2024
Labels: Upstreamed (change has been upstreamed to PyTorch master), DV: In Progress
Projects: none yet
3 participants