Activity · yaox12/pytorch (fork of pytorch/pytorch; default branch: master; created 2022-11-30)

Events, newest first:

- 2024-03-06 — push to master (2393 commits) by Xin Yao (@yaox12). Head commit: "torch check the division by zero in batch_norm_update_stats (#120882)". Fixes #120803. Co-authored-by: Nikita Shulga. Pull request: https://github.com/pytorch/pytorch/pull/120882, approved by https://github.com/CaoE and https://github.com/malfet.

- 2024-01-02 — branch created: fix_torch_cuda_memory_rst, by Xin Yao (@yaox12). Head commit: "Update torch_cuda_memory.rst".

- 2024-01-02 — push to master (5061 commits) by Xin Yao (@yaox12). Head commit: "[MPS] Speedup addmm (#116548)": do not copy bias to output; skip the respective multiplication op if either alpha or beta equals 1.0. Pull request: https://github.com/pytorch/pytorch/pull/116548, approved by https://github.com/albanD. ghstack dependencies: #116547.

- 2023-08-09 — push to master (3006 commits) by Xin Yao (@yaox12). Head commit: "[vision hash update] update the pinned vision hash (#106832)", auto-generated nightly by the _update-commit-hash.yml GitHub Action. Pull request: https://github.com/pytorch/pytorch/pull/106832, approved by https://github.com/pytorchbot.

- 2023-05-05 — push to master (2296 commits) by Xin Yao (@yaox12). Head commit: "[quant][pt2] Improve prepare_qat Conv + BN numerics test (#100271)": tests per_tensor_symmetric in addition to per_channel_symmetric, and initializes BN stats the same way in both flows, which is necessary for the per_tensor_symmetric case to pass. Test plan: python test/test_quantization.py TestQuantizePT2E.test_prepare_qat_conv_bn_numerics. Reviewers: jerryzh168, kimishpatel. Differential Revision: D45512851. Pull request: https://github.com/pytorch/pytorch/pull/100271, approved by https://github.com/jerryzh168.

- 2023-03-09 — push to master (3025 commits) by Xin Yao (@yaox12). Head commit: Revert "[reland][inductor] Add an AOT compilation mode for Inductor CPP backend (#95985)" (reverts commit deaf9e5e659a1f73656cbbacb39448498e857163). Reverted on behalf of https://github.com/huydhn because it significantly increased test time for ASAN (and possibly other test shards); initial findings in https://github.com/pytorch/pytorch/issues/96378.