Commit
Remove torch._six from test_mps (#326)

Fix test_zero_grad() (#330)

Fix bilinear backward pass (#331)
* Fix bilinear backward pass
* Remove comment

Update macOS 12 blocklist (#323)
* Update macOS 12 blocklist
  - move sum, masked.var, mul to the low precision list
  - unblock them from running
* mark __rdiv__ failures as accumulate error exceeds atol/rtol

Fix nn.functional.embedding grad (#335)
- cast the input tensor to float32 and cast the output tensor back (see the first sketch after this log)
- unblock the test

Fix prelu backward (#334)

Reduction cast f16 to f32 only on macOS 12 (#332)
- unblock rdiv float16

Fix trace op (#340)
- warn when converting int64 for reduction ops
- use the cast tensor for the reduction sum on trace
- unblock trace from running

Update random result list (#339)
* move nn.functional.feature_alpha_dropoutwith_train, normalnumber_mean, new_empty_strided to expected failures
* update new_empty_strided

---------

Co-authored-by: Kulin Seth <kulin_seth@apple.com>

Enable int8 in TestConsistency (#347)

Dev/skotapati/copy broadcasting (#350)
* Handle broadcasting by expanding the src tensor in Copy.mm (see the second sketch after this log)
* Unblock linalg_matrix_power
* Improved formatting

Add the functionality to dump MPS ops
1. DUMP_MPS_OPS uses LoggingTensor to dump out the ATen ops.
2. Skip running the EXPECTTEST list, as some tests are still seg-faulting.

Fix lintrunner errors (#353)
* Fix lintrunner errors
* move normal_in_place to the random result list

Fixed test_mps; the test is updated.
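First sketch: a minimal illustration of the float32 cast-and-cast-back workaround described in #335 (and applied to reductions on macOS 12 in #332). The helper name and shapes are hypothetical; the actual change lives in the MPS backend kernels, not in Python.

```python
import torch
import torch.nn.functional as F

def embedding_fp32_workaround(indices: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Compute the embedding in float32 so the gradient stays accurate,
    # then cast the result back to the weight's original dtype (e.g. float16).
    out = F.embedding(indices, weight.to(torch.float32))
    return out.to(weight.dtype)

# Usage (illustrative): a float16 embedding table on the "mps" device would
# normally lose precision in the backward pass without the cast.
weight = torch.randn(10, 4, dtype=torch.float16, requires_grad=True)
indices = torch.tensor([0, 3, 7])
embedding_fp32_workaround(indices, weight).sum().backward()
```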
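Second sketch: the broadcasting approach from #350, where the source tensor is expanded to the destination's shape before the copy. Python is shown only to illustrate the idea; the real change is in Copy.mm on the MPS backend.

```python
import torch

dst = torch.zeros(4, 3)
src = torch.tensor([1.0, 2.0, 3.0])   # shape (3,) broadcasts across dim 0 of dst

# Expand src to dst's shape first, then copy; this mirrors how Copy.mm now
# handles a broadcasted dst.copy_(src) instead of copying mismatched shapes.
dst.copy_(src.expand(dst.shape))
print(dst)
```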