{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":598368521,"defaultBranch":"main","name":"pytorch","ownerLogin":"ilyasher","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2023-02-07T00:43:08.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/46343317?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1692121560.0","currentOid":""},"activityList":{"items":[{"before":"1cdd23c0db0e828ddad2cd5570752f2ff6a4c76a","after":"6fd6c9e1428b50124804d7e081974d6aaad6bc7f","ref":"refs/heads/fix-export-memory-leak","pushedAt":"2023-08-16T22:33:01.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"ilyasher","name":"Ilya Sherstyuk","path":"/ilyasher","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/46343317?s=80&v=4"},"commit":{"message":"[ONNX] Fix memory leak when exporting models\n\nThis commit fixes a memory leak caused by creating a new PyListObject\nusing PyDict_Items() and not releasing that list later. This often\nprevented the entire model from being de-allocated even when all python\nreferences to it have gone out of scope.\n\nSigned-off-by: Ilya Sherstyuk ","shortMessageHtmlLink":"[ONNX] Fix memory leak when exporting models"}},{"before":"78fffe890627bc020c9bd82a27a19d042f31695e","after":"a4229690e30d76a9980c71c9eac6fa8e6cf22429","ref":"refs/heads/main","pushedAt":"2023-08-16T22:19:05.000Z","pushType":"push","commitsCount":575,"pusher":{"login":"ilyasher","name":"Ilya Sherstyuk","path":"/ilyasher","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/46343317?s=80&v=4"},"commit":{"message":"Add Some Checks about dim (#107223)\n\nFixes #106769\n\nAs mentioned in [GRUCell](https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html#grucell), `hidden` should have the same dimension as `input`, and the dimension should be either `1D` or `2D`.\n\nAs for other aspects, it has been verified in `C++`, such as the batch of `Input` and `hidden` are the same, `Input`'s Dim1 and `input_size` are the same, `hidden`'s Dim1 and `hidden_size` are the same, etc.\nPull Request resolved: https://github.com/pytorch/pytorch/pull/107223\nApproved by: https://github.com/albanD","shortMessageHtmlLink":"Add Some Checks about dim (pytorch#107223)"}},{"before":null,"after":"1cdd23c0db0e828ddad2cd5570752f2ff6a4c76a","ref":"refs/heads/fix-export-memory-leak","pushedAt":"2023-08-15T17:46:00.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"ilyasher","name":"Ilya Sherstyuk","path":"/ilyasher","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/46343317?s=80&v=4"},"commit":{"message":"[ONNX] Fix memory leak when exporting models\n\nThis commit fixes a memory leak caused by creating a new PyListObject\nusing PyDict_Items() and not releasing that list later. 
- **2023-07-27 17:28 · 821 commits pushed to `main`** by ilyasher, head commit:
  > **Bump certifi from 2023.5.7 to 2023.7.22 in /tools/build/bazel (pytorch#105983)**
  >
  > Bumps [certifi](https://github.com/certifi/python-certifi) from 2023.5.7 to 2023.7.22 (indirect dependency).
  >
  > Signed-off-by: dependabot[bot]

- **2023-07-27 17:27 · `fix-if-shape-inference` force-pushed** by ilyasher
  > **[ONNX] Perform shape inference on added "Cast" node**
  >
  > This commit fixes a bug where some "If" nodes blocked shape inference during ONNX graph building. In fixup_onnx_controlflow, a "Cast" node is added to the condition of "If" and "Loop" nodes if the condition type is not bool. This commit performs shape inference on that new "Cast" node, which allows its output to be marked as "reliable" in ConstantValueMap during further shape inference. This would eventually have happened when shape inference was performed on the entire graph, but the inferred shapes are also useful to have during ONNX graph building, since they allow some ops (like Squeeze) to export into simpler subgraphs. Also adds a test for this.
  >
  > Signed-off-by: Ilya Sherstyuk

- **2023-07-27 00:13 · branch `fix-if-shape-inference` created** by ilyasher, with the same commit as above.
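The "If"/"Loop" fixup above is easiest to picture with a scripted model whose branches are data-dependent. A sketch under assumed names and shapes (whether the exporter actually needs to insert the extra "Cast" depends on the condition's inferred type, so this only illustrates the kind of graph fixup_onnx_controlflow processes):

```python
# A data-dependent `if` becomes an ONNX "If" node when scripted and
# exported; fixup_onnx_controlflow casts non-bool conditions to bool,
# and the commit above runs shape inference on that Cast so ops such
# as Squeeze see known shapes inside and after the branches.
import io

import torch


class CondModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if bool(x.sum() > 0):
            y = x * 2
        else:
            y = x + 1
        return y.squeeze(0)  # exports to a simpler subgraph when shapes are known


scripted = torch.jit.script(CondModel())
torch.onnx.export(scripted, (torch.randn(1, 3, 4),), io.BytesIO())
```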
- **2023-07-24 16:41 · `improve-slice-shape-inference` force-pushed** by pytorchmergebot
  > **[ONNX] Improve shape inference for Slice**
  >
  > For input axes which are not being sliced, set the output shape to be the same as the input shape. Adds a new test to cover this case.
  >
  > Signed-off-by: Ilya Sherstyuk

- **2023-07-21 20:44 · `improve-slice-shape-inference` force-pushed** by ilyasher, same commit as above.

- **2023-07-21 18:42 · branch `improve-slice-shape-inference` created** by ilyasher, same commit as above.

- **2023-07-06 18:15 · `fix-slice-export` force-pushed** by pytorchmergebot
  > **[ONNX] Export dynamic step size for aten::slice()**
  >
  > This commit improves the export of aten::slice() to ONNX in two ways: (1) the step size can be an input tensor rather than a constant, and (2) it fixes a bug where using a 1-D, 1-element torch tensor as an index created a broken ONNX model. It also adds tests for the new functionality.
  >
  > Signed-off-by: Ilya Sherstyuk
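One pattern the aten::slice() commit covers is a slice bound taken from a 1-D, 1-element tensor. A hedged sketch of that pattern under scripting (module name and inputs are mine; the commit's own tests are the authoritative repro, and the commit additionally allows a non-constant step size):

```python
# A slice bound taken from a 1-D, 1-element tensor at runtime; per the
# commit above, this pattern used to produce a broken ONNX model.
import io

import torch


class DynSlice(torch.nn.Module):
    def forward(self, x: torch.Tensor, start: torch.Tensor) -> torch.Tensor:
        return x[int(start[0]):]  # dynamic start for aten::slice()


scripted = torch.jit.script(DynSlice())
torch.onnx.export(
    scripted,
    (torch.randn(6, 3), torch.tensor([2])),
    io.BytesIO(),
    input_names=["x", "start"],
)
```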
- **2023-06-30 22:43 · `fix-slice-export` force-pushed** by ilyasher, same "[ONNX] Export dynamic step size for aten::slice()" commit as above.
- **2023-06-30 20:22 · `fix-slice-export` force-pushed** by ilyasher, same commit.
- **2023-06-30 02:34 · `fix-slice-export` force-pushed** by ilyasher, same commit.
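To make the earlier "[ONNX] Improve shape inference for Slice" commit concrete: if only axis 1 of a rank-3 input is sliced, axes 0 and 2 should keep the input's sizes in the exported model's inferred output shape. A sketch under assumed shapes and names:

```python
# Only axis 1 is sliced, so the inferred ONNX output shape should
# keep the input's sizes on axes 0 and 2: (2, 5, 7) -> (2, 2, 7).
import io

import onnx
import torch


class SliceModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x[:, 1:3]  # slices axis 1 only


buf = io.BytesIO()
torch.onnx.export(SliceModel(), (torch.randn(2, 5, 7),), buf)
exported = onnx.load_from_string(buf.getvalue())
dims = exported.graph.output[0].type.tensor_type.shape.dim
print([d.dim_value for d in dims])  # expected: [2, 2, 7]
```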
- **2023-06-30 00:16 · 602 commits pushed to `main`** by ilyasher, head commit:
  > **Add back in reduce_scatter_tensor_coalesced (pytorch#104345)**
  >
  > #104256 erroneously removed the pybind definition for `reduce_scatter_tensor_coalesced` introduced in #103561. This adds it back in and introduces a test for the API. Test command:
  > ```
  > pytest test/distributed/test_c10d_nccl.py -vsk test_reduce_scatter_tensor_coalesced
  > ```
  > Approved by: https://github.com/kwen2501

- **2023-06-29 21:39 · `fix-slice-export` force-pushed** by ilyasher, same "[ONNX] Export dynamic step size for aten::slice()" commit as above.
- **2023-06-29 21:32 · `fix-slice-export` force-pushed** by ilyasher, same commit.
- **2023-06-29 19:42 · `fix-slice-export` force-pushed** by ilyasher, same commit.
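For context on the restored binding: `reduce_scatter_tensor_coalesced` batches several reduce-scatter operations, while the single-tensor form is the public `torch.distributed.reduce_scatter_tensor`. A minimal sketch of that public API only (the coalesced variant is an internal c10d binding, so it is not shown); this assumes two CUDA-capable ranks launched with `torchrun --nproc_per_node=2`:

```python
# Each rank contributes a full tensor and receives one reduced shard.
import torch
import torch.distributed as dist

dist.init_process_group("nccl")  # reduce-scatter needs the NCCL backend
rank = dist.get_rank()
world = dist.get_world_size()
torch.cuda.set_device(rank)

inp = torch.ones(world * 2, device="cuda")  # full input on every rank
out = torch.empty(2, device="cuda")         # this rank's shard
dist.reduce_scatter_tensor(out, inp)        # out == world * ones(2)
print(f"rank {rank}: {out.tolist()}")
dist.destroy_process_group()
```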
- **2023-06-29 00:50 · `fix-slice-export` force-pushed** by ilyasher, same "[ONNX] Export dynamic step size for aten::slice()" commit as above.
- **2023-06-29 00:49 · `fix-slice-export` force-pushed** by ilyasher, same commit.
- **2023-06-29 00:46 · branch `fix-slice-export` created** by ilyasher, same commit.
- **2023-06-27 00:08 · `speed-up-onnx-export` force-pushed** by ilyasher
  > **Don't skip 2nd shape inference in export**
  >
  > Signed-off-by: Ilya Sherstyuk

- **2023-06-27 00:08 · 1 commit pushed to `speed-up-onnx-export`** by ilyasher, same commit as above.

- **2023-06-13 16:56 · `add-softmax-const-fold` force-pushed** by pytorchmergebot
  > **[ONNX] Add constant folding for Softmax op**
  >
  > This commit adds a torch implementation for the ONNX Softmax op, which allows it to be folded during ONNX export if all of its inputs are known.

- **2023-06-09 20:26 · `speed-up-onnx-export` force-pushed** by ilyasher
  > **[ONNX] Speed up export of large models**
  >
  > This commit speeds up the ONNX export of large models by: removing an unnecessary memcpy in GetGraphProtoSize; passing a pointer to the ModelProto around export.cpp instead of the ModelProto itself; and calling the time-consuming shape inference function only once (instead of twice) during export.

- **2023-06-09 17:06 · 243 commits pushed to `main`** by ilyasher, head commit:
  > **Record view stacks if running anomaly mode (pytorch#103185)**
  >
  > Now, when you do an inplace mutation and the view is naughty, you get this message:
  > ```
  > RuntimeError: A view was created in no_grad mode and is being modified inplace with grad mode enabled. Given that this use case is ambiguous and error-prone, it is forbidden. You can clarify your code by moving both the view and the inplace either both inside the no_grad block (if you don't want the inplace to be tracked) or both outside (if you want the inplace to be tracked). To find out where this view was allocated, run your entire forward region under anomaly mode (torch.autograd.detect_anomaly(check_nan=False)).
  > ```
  > When you run under anomaly mode, the same error instead ends with "This view was allocated at:" followed by the Python stack at the time the view was created (in the original message, frames from test/test_autograd.py plus the unittest and torch.testing._internal.common_utils runner frames).
  >
  > Signed-off-by: Edward Z. Yang
  > Pull Request resolved: https://github.com/pytorch/pytorch/pull/103185
  > Approved by: https://github.com/zdevito
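The forbidden pattern behind that error is easy to reproduce; a sketch (tensor shape mine) that triggers it with anomaly mode active, so the allocation stack is recorded when the view is created:

```python
# View created in no_grad mode, then mutated in grad mode: this raises
# the RuntimeError quoted above, and because anomaly mode is active at
# view creation, the message includes where the view was allocated.
import torch

base = torch.randn(3, requires_grad=True).clone()  # non-leaf, requires grad
with torch.autograd.detect_anomaly(check_nan=False):
    with torch.no_grad():
        view = base[0:2]       # view allocated in no_grad mode
    try:
        view.mul_(2)           # inplace with grad mode enabled
    except RuntimeError as err:
        print(err)             # includes "This view was allocated at: ..."
```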
- **2023-06-09 17:05 · `speed-up-onnx-export` force-pushed** by ilyasher, same "[ONNX] Speed up export of large models" commit as above.
- **2023-06-09 00:54 · `speed-up-onnx-export` force-pushed** by ilyasher, same commit.
- **2023-06-08 23:36 · `speed-up-onnx-export` force-pushed** by ilyasher, same commit.
- **2023-06-08 23:15 · branch `speed-up-onnx-export` created** by ilyasher (head commit message: "fixes").
- **2023-06-02 18:30 · branch `add-softmax-const-fold` created** by ilyasher, same "[ONNX] Add constant folding for Softmax op" commit as above.
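To see what the Softmax folding enables: if a Softmax's input is fully known at export time, constant folding can precompute it so that, ideally, no Softmax node remains in the exported graph. A sketch under those assumptions (module name and constant are mine):

```python
# Softmax over a constant tensor; with do_constant_folding=True the
# exporter can now precompute it instead of emitting a Softmax node.
import io

import onnx
import torch


class ConstSoftmax(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.w = torch.arange(4.0)  # constant known at export time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.softmax(self.w, dim=0)


buf = io.BytesIO()
torch.onnx.export(ConstSoftmax(), (torch.randn(4),), buf, do_constant_folding=True)
exported = onnx.load_from_string(buf.getvalue())
print([n.op_type for n in exported.graph.node])  # ideally no "Softmax"
```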
*(Showing the 30 most recent events; older activity continues on the next page.)*