
Execute convolution in NCHW if the suggested mem format is NHWC but the actual mem layout is NCHW #303

Open · wants to merge 30 commits into master

Commits on Feb 14, 2023

  1. Add back support for PYTORCH_TEST_WITH_MPS (#66)

    Fix the TEST_WITH_MPS macro.
    DenisVieriu97 authored and kulinseth committed Feb 14, 2023
    Commit: 23d334b
  2. Enable MPS CI runners (#252)

    * Test MPS CI runners
    
    * Cherry pick remaining files
    
    * Enable lintrunner
    
    * Change lint runner
    
    * Retrigger checks
    
    * Retrigger checks #2
    
    * Retrigger checks #3
    
    * Retrigger checks #4
    
    * Retrigger checks #5
    
    * Retrigger checks #5
    
    * Retrigger checks #7
    
    * Retrigger checks #8
    
    * Retrigger checks #9
    
    * Retrigger checks #9 (change arch to arm)
    
    * Retrigger checks #10
    
    * Retrigger checks #11
    
    * Retrigger checks #12
    
    * Retrigger checks #13
    
    * Retrigger checks #14
    
    * Retrigger checks #14
    
    * Retrigger checks #15
    
    * Retrigger checks #16
    
    * Retrigger checks #16
    
    * Retrigger checks #17
    
    * Retrigger checks #19
    
    * Retrigger checks #20
    
    * Retrigger checks #21
    
    * Fix lintrunner
    
    * Fix lintrunner
    
    * Remove lint.json
    DenisVieriu97 authored and kulinseth committed Feb 14, 2023
    Commit: 12085cf
  3. Use DISTRIBUTED=1 for MPS CI runners (#292)

    * Use DISTRIBUTED=1 for MPS CI runners
    
    * Disable openmp
    DenisVieriu97 authored and kulinseth committed Feb 14, 2023
    Commit: e8f89df
  4. Update the test mps.

    kulinseth committed Feb 14, 2023
    Commit: 85cdb98
  5. Commit 66951a0
  6. Remove unnecessary CI files (#327)

    * Remove unnecessary CI files
    
    * Additional files
    
    * Update lint
    DenisVieriu97 committed Feb 14, 2023
    Commit: 5ada241
  7. Enable test modules on MPS and CI runners (#305) (#324)

    * Enable test modules on MPS and CI runners
    
    * Update lint.yml
    
    * Update comments
    
    * Retrigger CI
    
    * Retrigger CI #2
    
    * Remove comment
    DenisVieriu97 committed Feb 14, 2023
    Commit: bf8eba9
  8. [CHERRY-PICK] Block uint8 data type for unary and binary ops on macOS 12. (#313) (#328)
    
    * Block uint8 data type for unary and binary ops on macOS 12. (#313)
    
    * fixes after cherry-pick
    
    ---------
    
    Co-authored-by: Ronian526 <11454459+Ronian526@users.noreply.github.com>
    DenisVieriu97 and Ronian526 committed Feb 14, 2023
    Commit: 2f336a4

Commits on Feb 15, 2023

  1. Fix test_zero_grad() (#330)

    razarmehr committed Feb 15, 2023
    Commit: 108cdc0
  2. Commit 8de3315
  3. Fix bilinear backward pass (#331)

    * Fix bilinear backward pass
    
    * Remove comment
    DenisVieriu97 committed Feb 15, 2023
    Commit: 051bc9c
  4. Update macOS 12 blocklist (#323)

    * Update macOS 12 blocklist
    - move sum, masked.var, mul to low precision list
    - unblock them from running
    
    * - mark __rdiv__ failures as accumulated error exceeding atol/rtol
    Ronian526 committed Feb 15, 2023
    Commit: 1b09ea2
  5. [MPS] Fixes for LSTM. (#319)

    - The backward pass has to be given an explicit bias tensor of zeros if none is passed to the op, or the bias gradient will not be calculated.
    - Fixed the bias tensor mistakenly getting overwritten to zeros.
    - Fixes a crash when the lstm op is called with has_biases set to false; the change takes into account the changed shape of the input params TensorList depending on the bias flag.
    
    Co-authored-by: Kulin Seth <kulin_seth@apple.com>
    jhavukainen and kulinseth committed Feb 15, 2023
    Commit: 8c7df6f
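    The has_biases fix above can be sketched in plain Python (names and shapes hypothetical, not the actual MPS code): the flat LSTM params list holds 4 tensors per layer with biases and only 2 without, so indexing must account for the flag, and the backward pass substitutes explicit zeros when no bias was provided.

    ```python
    def layer_params(params, layer, has_biases, hidden_size):
        """Pick one layer's weights out of the flat LSTM params list.

        With biases the layout is [w_ih, w_hh, b_ih, b_hh] per layer;
        without, it is [w_ih, w_hh], so the stride changes with the flag.
        """
        stride = 4 if has_biases else 2
        base = layer * stride
        w_ih, w_hh = params[base], params[base + 1]
        if has_biases:
            b_ih, b_hh = params[base + 2], params[base + 3]
        else:
            # Backward needs an explicit zero bias or the bias grad is skipped.
            b_ih = b_hh = [0.0] * (4 * hidden_size)
        return w_ih, w_hh, b_ih, b_hh

    params = ["w_ih0", "w_hh0", "w_ih1", "w_hh1"]  # two layers, no biases
    w_ih, w_hh, b_ih, b_hh = layer_params(params, 1, False, 2)
    print(w_ih, w_hh)  # w_ih1 w_hh1
    ```
    
    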
  6. Fix nn.functional.conv_transpose2d grad (#312) (#329)

    - add _mps_convolution_impl that takes an optional shape
    - for conv_transpose2d grad, use the shape from the input directly
    - remove nn.functional.conv_transpose2d grad from the blocklist
    
    Co-authored-by: Ronian526 <11454459+Ronian526@users.noreply.github.com>
    DenisVieriu97 and Ronian526 committed Feb 15, 2023
    Commit: d42f74f
  7. Fix the crash in elu_backward() (#333)

    Fixes a case where the inputTensor could go null and cause a crash.
    razarmehr committed Feb 15, 2023
    Commit: 2856203
  8. Fix nn.functional.embedding grad (#335)

    - cast the input tensor to float32 and cast the output tensor back
    - unblock the test
    Ronian526 committed Feb 15, 2023
    Commit: 18797b0
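    The cast-up-then-cast-back workaround in the embedding fix can be illustrated in plain Python (a hypothetical sketch, not the MPS implementation): run the op at float32 and restore the original precision on the way out.

    ```python
    def run_in_float32(op, x, back_cast):
        """Run op on a float32 copy of x, then cast the result back."""
        y = op([float(v) for v in x])      # cast input up to float32
        return [back_cast(v) for v in y]   # cast output back to original dtype

    # Example: doubling integer inputs through a float32 pipeline.
    out = run_in_float32(lambda xs: [v * 2 for v in xs], [1, 2, 3], int)
    print(out)  # [2, 4, 6]
    ```
    
    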
  9. Commit cf06ac5
  10. Reduction cast f16 to f32 only on macOS 12 (#332)

    - unblock rdiv float16
    Ronian526 committed Feb 15, 2023
    Commit: c65b823

Commits on Feb 16, 2023

  1. Commit 73f7068
  2. Fix upsample for NHWC output (#337)

    * Fix upsample for NHWC output
    
    * Add testcase
    DenisVieriu97 committed Feb 16, 2023
    Commit: 1c8f126
  3. Commit 42be72a

Commits on Feb 17, 2023

  1. Fix trace op (#340)

    - warn when converting int64 for reduction ops
    - use a cast tensor for the reduction sum in trace
    - unblock trace from running
    Ronian526 committed Feb 17, 2023
    Commit: 6ace5f9
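    The trace fix can be sketched in plain Python (an illustration under the stated assumption that MPS reduction ops lack int64 support): compute trace as a reduction sum over the main diagonal, casting each element to a supported dtype first.

    ```python
    def trace(matrix, cast=float):
        """Sum of the main diagonal, casting each element before the reduction."""
        n = min(len(matrix), len(matrix[0]))
        return sum(cast(matrix[i][i]) for i in range(n))

    m = [[1, 2], [3, 4]]
    print(trace(m))  # 5.0
    ```
    
    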
  2. Update random result list (#339)

    * - move nn.functional.feature_alpha_dropout with_train, normal number_mean, new_empty_strided to expected failures
    
    * - update new_empty_strided
    
    ---------
    
    Co-authored-by: Kulin Seth <kulin_seth@apple.com>
    Ronian526 and kulinseth committed Feb 17, 2023
    Commit: c9b8ab7
  3. Fix convolution crash in backward with weights; remove unnecessary contiguous calls (#341)
    
    * Fix convolution crash; remove unnecessary contiguous calls
    
    * Fix lintrunner
    DenisVieriu97 committed Feb 17, 2023
    Commit: d3e414e
  4. Fix copy_cast_mps() on tensors with storage offset (#343)

    This should fix the failure with GPT2 when use_cache=True
    razarmehr committed Feb 17, 2023
    Commit: be8817b
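    The storage-offset bug class fixed above can be shown with a minimal plain-Python sketch (hypothetical, not the copy_cast_mps code): a copy/cast over a view must start reading at the view's storage offset, not at the start of the underlying storage, or it copies the wrong elements.

    ```python
    def copy_cast(storage, offset, length, cast=float):
        """Copy `length` elements starting at `offset`, casting each one."""
        return [cast(x) for x in storage[offset:offset + length]]

    storage = [10, 11, 12, 13, 14]
    print(copy_cast(storage, 2, 3))  # [12.0, 13.0, 14.0]
    ```
    
    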

Commits on Feb 18, 2023

  1. Commit 8e37116

Commits on Feb 21, 2023

  1. Convolution cleanup (#346)

    Co-authored-by: Ramin Azarmehr <razarmehr@apple.com>
    DenisVieriu97 and razarmehr committed Feb 21, 2023
    Commit: c30946a
  2. Dev/skotapati/copy broadcasting (#350)

    * Handle broadcasting by expanding src tensor in Copy.mm
    
    * Unblock linalg_matrix_power
    
    * Improved formatting
    skotapati committed Feb 21, 2023
    Commit: b520970
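    The "expand the src tensor before copying" idea can be sketched in plain Python (an illustration of NumPy/PyTorch-style broadcasting, not the actual Copy.mm code): pad the source shape with leading size-1 dims and give broadcast dims a stride of 0 so every destination element maps back to a valid source element.

    ```python
    def expand_to(src_shape, src_strides, dst_shape):
        """Return strides that view src as dst_shape via broadcasting."""
        pad = len(dst_shape) - len(src_shape)
        shape = [1] * pad + list(src_shape)
        strides = [0] * pad + list(src_strides)
        out = []
        for s, st, d in zip(shape, strides, dst_shape):
            if s == d:
                out.append(st)
            elif s == 1:
                out.append(0)  # broadcast dim: reuse the same source element
            else:
                raise ValueError(f"cannot broadcast {src_shape} to {dst_shape}")
        return out

    # src of shape (3,) with stride (1,) broadcast to dst shape (2, 3):
    print(expand_to((3,), (1,), (2, 3)))  # [0, 1]
    ```
    
    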
  3. Execute convolution in NCHW if the suggested mem format is NHWC but the actual mem layout is NCHW
    DenisVieriu97 committed Feb 21, 2023
    Commit: 0473fe8
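    The PR's central decision can be sketched in plain Python (a hypothetical illustration; in PyTorch proper the equivalent checks go through Tensor.is_contiguous and the suggested memory format): if the strides show the memory is actually NCHW-contiguous, run the convolution in NCHW even when NHWC was suggested.

    ```python
    def contiguous_strides(shape):
        """Row-major (NCHW-contiguous) strides for a shape."""
        strides = [1] * len(shape)
        for i in range(len(shape) - 2, -1, -1):
            strides[i] = strides[i + 1] * shape[i + 1]
        return strides

    def channels_last_strides(shape):
        """Channels-last (NHWC) strides for a 4-D NCHW shape."""
        n, c, h, w = shape
        return [c * h * w, 1, w * c, c]

    def pick_conv_layout(shape, strides, suggested="NHWC"):
        """Run the conv in NCHW when memory is actually NCHW-contiguous,
        regardless of the suggestion; otherwise honor the suggestion."""
        if list(strides) == contiguous_strides(shape):
            return "NCHW"
        return suggested

    shape = (2, 3, 4, 5)
    # Suggested format NHWC, but memory really laid out as plain NCHW:
    print(pick_conv_layout(shape, contiguous_strides(shape)))   # NCHW
    print(pick_conv_layout(shape, channels_last_strides(shape)))  # NHWC
    ```
    
    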
  4. Fix build failure

    DenisVieriu97 committed Feb 21, 2023
    Commit: 9a5b002