{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":416522009,"defaultBranch":"master","name":"pytorch","ownerLogin":"andrewor14","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2021-10-12T22:57:14.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/2133137?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1696259510.0","currentOid":""},"activityList":{"items":[{"before":"e6c8c2653a9f9be74d9e0c809e4419dba66229c3","after":"6a8b0e964bd45c4e7092b35dd574677263730cef","ref":"refs/heads/export-D49805849","pushedAt":"2023-10-05T01:48:27.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch blocker for SN HTP model (#110392)\n\nSummary:\n\nD49187352 caused our model conversion and loading of QAT checkpoint to be stuck with thrift time out.\n\nwe are actively checking in final code and model for static quant HTP prod model, and encountered this breakage at head Thursday.\n\nThrift timeout is a not failing, and because of that, it's hard to bisect and find this culprit. It is also hard to set up unit test, because the job simply time-out. Better test is needed to guard downstream model conversion against upstream changes.\n\nOur suspicion of why this diff broke us is that we create a lot of modules with qat (in a recursive manner) but our model is not a qat traceable module (it is a graph with many qat modules and floating point modules). With fuctools.partial as in the original diff, we will be caching modules in the memory and causing the memory of the machine to be taken up completely.\n\nbypass-github-export-checks\n\nTest Plan:\n## Before backout :\nSometimes it shows thrift timeout: P841827755\nSometimes it just hangs and killed my dev server terminal.\n\n## After backout: no issue reloading model P842598166\n\n## Commands\n```\nhg rebase -s D49800326 -d . && hg co D49800326 && \\\nbuck2 run mode/opt deeplearning/projects/pyspeech:jit_static_quant_QAT_model \\\n-- --model_checkpoint_path=manifold://speech_training_v2/tree/mast/f484220747/checkpoint_1_10000.pt --is_per_channel || status=$?\n\n```\n\nReviewed By: junesg\n\nDifferential Revision: D49805849","shortMessageHtmlLink":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch …"}},{"before":"2a62f1424d1eada0d1c8c0d0794856e8bb87202b","after":"e6c8c2653a9f9be74d9e0c809e4419dba66229c3","ref":"refs/heads/export-D49805849","pushedAt":"2023-10-05T01:45:35.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch blocker for SN HTP model (#110392)\n\nSummary:\n\nD49187352 caused our model conversion and loading of QAT checkpoint to be stuck with thrift time out.\n\nwe are actively checking in final code and model for static quant HTP prod model, and encountered this breakage at head Thursday.\n\nThrift timeout is a not failing, and because of that, it's hard to bisect and find this culprit. It is also hard to set up unit test, because the job simply time-out. Better test is needed to guard downstream model conversion against upstream changes.\n\nOur suspicion of why this diff broke us is that we create a lot of modules with qat (in a recursive manner) but our model is not a qat traceable module (it is a graph with many qat modules and floating point modules). With fuctools.partial as in the original diff, we will be caching modules in the memory and causing the memory of the machine to be taken up completely.\n\nbypass-github-export-checks\n\nTest Plan:\n## Before backout :\nSometimes it shows thrift timeout: P841827755\nSometimes it just hangs and killed my dev server terminal.\n\n## After backout: no issue reloading model P842598166\n\n## Commands\n```\nhg rebase -s D49800326 -d . && hg co D49800326 && \\\nbuck2 run mode/opt deeplearning/projects/pyspeech:jit_static_quant_QAT_model \\\n-- --model_checkpoint_path=manifold://speech_training_v2/tree/mast/f484220747/checkpoint_1_10000.pt --is_per_channel || status=$?\n\n```\n\nReviewed By: junesg\n\nDifferential Revision: D49805849","shortMessageHtmlLink":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch …"}},{"before":"cbbdafb82ad9e8a0d314bcc685ea9c4280950d74","after":"2a62f1424d1eada0d1c8c0d0794856e8bb87202b","ref":"refs/heads/export-D49805849","pushedAt":"2023-10-05T01:41:45.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch blocker for SN HTP model (#110392)\n\nSummary:\n\nD49187352 caused our model conversion and loading of QAT checkpoint to be stuck with thrift time out.\n\nwe are actively checking in final code and model for static quant HTP prod model, and encountered this breakage at head Thursday.\n\nThrift timeout is a not failing, and because of that, it's hard to bisect and find this culprit. It is also hard to set up unit test, because the job simply time-out. Better test is needed to guard downstream model conversion against upstream changes.\n\nOur suspicion of why this diff broke us is that we create a lot of modules with qat (in a recursive manner) but our model is not a qat traceable module (it is a graph with many qat modules and floating point modules). With fuctools.partial as in the original diff, we will be caching modules in the memory and causing the memory of the machine to be taken up completely.\n\nbypass-github-export-checks\n\nTest Plan:\n## Before backout :\nSometimes it shows thrift timeout: P841827755\nSometimes it just hangs and killed my dev server terminal.\n\n## After backout: no issue reloading model P842598166\n\n## Commands\n```\nhg rebase -s D49800326 -d . && hg co D49800326 && \\\nbuck2 run mode/opt deeplearning/projects/pyspeech:jit_static_quant_QAT_model \\\n-- --model_checkpoint_path=manifold://speech_training_v2/tree/mast/f484220747/checkpoint_1_10000.pt --is_per_channel || status=$?\n\n```\n\nReviewed By: junesg\n\nDifferential Revision: D49805849","shortMessageHtmlLink":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch …"}},{"before":"d904f1c22e633c3162587515447fbb62d73f2f67","after":"cbbdafb82ad9e8a0d314bcc685ea9c4280950d74","ref":"refs/heads/export-D49805849","pushedAt":"2023-10-02T15:19:34.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch blocker for SN HTP model (#110392)\n\nSummary:\n\nD49187352 caused our model conversion and loading of QAT checkpoint to be stuck with thrift time out.\n\nwe are actively checking in final code and model for static quant HTP prod model, and encountered this breakage at head Thursday.\n\nThrift timeout is a not failing, and because of that, it's hard to bisect and find this culprit. It is also hard to set up unit test, because the job simply time-out. Better test is needed to guard downstream model conversion against upstream changes.\n\nOur suspicion of why this diff broke us is that we create a lot of modules with qat (in a recursive manner) but our model is not a qat traceable module (it is a graph with many qat modules and floating point modules). With fuctools.partial as in the original diff, we will be caching modules in the memory and causing the memory of the machine to be taken up completely.\n\nTest Plan:\n## Before backout :\nSometimes it shows thrift timeout: P841827755\nSometimes it just hangs and killed my dev server terminal.\n\n## After backout: no issue reloading model P842598166\n\n## Commands\n```\nhg rebase -s D49800326 -d . && hg co D49800326 && \\\nbuck2 run mode/opt deeplearning/projects/pyspeech:jit_static_quant_QAT_model \\\n-- --model_checkpoint_path=manifold://speech_training_v2/tree/mast/f484220747/checkpoint_1_10000.pt --is_per_channel || status=$?\n\n```\n\nDifferential Revision: D49805849","shortMessageHtmlLink":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch …"}},{"before":"025c23f5fa9a43929e3623ecfa1fd0afd614e9b1","after":"d904f1c22e633c3162587515447fbb62d73f2f67","ref":"refs/heads/export-D49805849","pushedAt":"2023-10-02T15:18:54.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch blocker for SN HTP model (#110392)\n\nSummary:\n\nD49187352 caused our model conversion and loading of QAT checkpoint to be stuck with thrift time out.\n\nwe are actively checking in final code and model for static quant HTP prod model, and encountered this breakage at head Thursday.\n\nThrift timeout is a not failing, and because of that, it's hard to bisect and find this culprit. It is also hard to set up unit test, because the job simply time-out. Better test is needed to guard downstream model conversion against upstream changes.\n\nOur suspicion of why this diff broke us is that we create a lot of modules with qat (in a recursive manner) but our model is not a qat traceable module (it is a graph with many qat modules and floating point modules). With fuctools.partial as in the original diff, we will be caching modules in the memory and causing the memory of the machine to be taken up completely.\n\nTest Plan:\n## Before backout :\nSometimes it shows thrift timeout: P841827755\nSometimes it just hangs and killed my dev server terminal.\n\n## After backout: no issue reloading model P842598166\n\n## Commands\n```\nhg rebase -s D49800326 -d . && hg co D49800326 && \\\nbuck2 run mode/opt deeplearning/projects/pyspeech:jit_static_quant_QAT_model \\\n-- --model_checkpoint_path=manifold://speech_training_v2/tree/mast/f484220747/checkpoint_1_10000.pt --is_per_channel || status=$?\n\n```\n\nDifferential Revision: D49805849","shortMessageHtmlLink":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch …"}},{"before":null,"after":"025c23f5fa9a43929e3623ecfa1fd0afd614e9b1","ref":"refs/heads/export-D49805849","pushedAt":"2023-10-02T15:11:50.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch blocker for SN HTP model\n\nSummary:\nD49187352 caused our model conversion and loading of QAT checkpoint to be stuck with thrift time out.\n\nwe are actively checking in final code and model for static quant HTP prod model, and encountered this breakage at head Thursday.\n\nThrift timeout is a not failing, and because of that, it's hard to bisect and find this culprit. It is also hard to set up unit test, because the job simply time-out. Better test is needed to guard downstream model conversion against upstream changes.\n\nOur suspicion of why this diff broke us is that we create a lot of modules with qat (in a recursive manner) but our model is not a qat traceable module (it is a graph with many qat modules and floating point modules). With fuctools.partial as in the original diff, we will be caching modules in the memory and causing the memory of the machine to be taken up completely.\n\nTest Plan:\n## Before backout :\nSometimes it shows thrift timeout: P841827755\nSometimes it just hangs and killed my dev server terminal.\n\n## After backout: no issue reloading model P842598166\n\n## Commands\n```\nhg rebase -s D49800326 -d . && hg co D49800326 && \\\nbuck2 run mode/opt deeplearning/projects/pyspeech:jit_static_quant_QAT_model \\\n-- --model_checkpoint_path=manifold://speech_training_v2/tree/mast/f484220747/checkpoint_1_10000.pt --is_per_channel || status=$?\n\n```\n\nDifferential Revision: D49805849","shortMessageHtmlLink":"Back out \"Enable pickling model prepared with QAT qconfig\" -- launch …"}},{"before":"a53d9728581f66683eb3e90299a03c7c9213be64","after":"ecf75bb1cb491a366abf1f070775891d0dea67bc","ref":"refs/heads/export-D49097293","pushedAt":"2023-09-08T19:46:37.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix and rename `move_model_to_eval` (#108891)\n\nSummary:\n\nThis commit fixes two silent correctness problems with\nthe current implementation of `move_model_to_eval`:\n\n(1) Previously the user had to manually call `eliminate_dead_code`\nbefore calling `move_model_to_eval`, otherwise the dropout pattern\nwon't actually get eliminated. This is because subgraph rewriter\ncomplains the match is not self-contained, and so silently does\nnot do the replacement.\n\n(2) We wish to error when the user calls `model.train()` or\n`model.eval()` on an exported model. This error is raised\ncorrectly immediately after export today, but no longer raised\nafter the user calls prepare or convert.\n\nWe fix (1) by moving the `eliminate_dead_code` call into\n`move_model_to_eval`, and fix (2) by ensuring the respective\nerrors are thrown after prepare and convert as well.\n\nAdditionally, this commit renames `move_model_to_eval` to\n`move_exported_model_to_eval` to be more explicit.\n\nbypass-github-export-checks\n\nTest Plan:\npython test/test_quantization.py TestQuantizePT2E.test_disallow_eval_train\npython test/test_quantization.py TestQuantizePT2E.test_move_exported_model_to_eval\n\n\nImported from OSS\n\nDifferential Revision: D49097293","shortMessageHtmlLink":"[quant][pt2] Fix and rename move_model_to_eval (pytorch#108891)"}},{"before":"a760a5388d34608a73ec5ceb2117984aab0028ff","after":"a53d9728581f66683eb3e90299a03c7c9213be64","ref":"refs/heads/export-D49097293","pushedAt":"2023-09-08T19:45:56.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix and rename `move_model_to_eval` (#108891)\n\nSummary:\n\nThis commit fixes two silent correctness problems with\nthe current implementation of `move_model_to_eval`:\n\n(1) Previously the user had to manually call `eliminate_dead_code`\nbefore calling `move_model_to_eval`, otherwise the dropout pattern\nwon't actually get eliminated. This is because subgraph rewriter\ncomplains the match is not self-contained, and so silently does\nnot do the replacement.\n\n(2) We wish to error when the user calls `model.train()` or\n`model.eval()` on an exported model. This error is raised\ncorrectly immediately after export today, but no longer raised\nafter the user calls prepare or convert.\n\nWe fix (1) by moving the `eliminate_dead_code` call into\n`move_model_to_eval`, and fix (2) by ensuring the respective\nerrors are thrown after prepare and convert as well.\n\nAdditionally, this commit renames `move_model_to_eval` to\n`move_exported_model_to_eval` to be more explicit.\n\nbypass-github-export-checks\n\nTest Plan:\npython test/test_quantization.py TestQuantizePT2E.test_disallow_eval_train\npython test/test_quantization.py TestQuantizePT2E.test_move_exported_model_to_eval\n\n\nImported from OSS\n\nDifferential Revision: D49097293","shortMessageHtmlLink":"[quant][pt2] Fix and rename move_model_to_eval (pytorch#108891)"}},{"before":"f1e5825971accdf8e2d428b3df3f794fcf96d9dd","after":"a760a5388d34608a73ec5ceb2117984aab0028ff","ref":"refs/heads/export-D49097293","pushedAt":"2023-09-08T19:37:49.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix and rename `move_model_to_eval` (#108891)\n\nSummary:\n\nThis commit fixes two silent correctness problems with\nthe current implementation of `move_model_to_eval`:\n\n(1) Previously the user had to manually call `eliminate_dead_code`\nbefore calling `move_model_to_eval`, otherwise the dropout pattern\nwon't actually get eliminated. This is because subgraph rewriter\ncomplains the match is not self-contained, and so silently does\nnot do the replacement.\n\n(2) We wish to error when the user calls `model.train()` or\n`model.eval()` on an exported model. This error is raised\ncorrectly immediately after export today, but no longer raised\nafter the user calls prepare or convert.\n\nWe fix (1) by moving the `eliminate_dead_code` call into\n`move_model_to_eval`, and fix (2) by ensuring the respective\nerrors are thrown after prepare and convert as well.\n\nAdditionally, this commit renames `move_model_to_eval` to\n`move_exported_model_to_eval` to be more explicit.\n\nbypass-github-export-checks\n\nTest Plan:\npython test/test_quantization.py TestQuantizePT2E.test_disallow_eval_train\npython test/test_quantization.py TestQuantizePT2E.test_move_exported_model_to_eval\n\n\nImported from OSS\n\nDifferential Revision: D49097293","shortMessageHtmlLink":"[quant][pt2] Fix and rename move_model_to_eval (pytorch#108891)"}},{"before":null,"after":"f1e5825971accdf8e2d428b3df3f794fcf96d9dd","ref":"refs/heads/export-D49097293","pushedAt":"2023-09-08T19:36:15.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix and rename `move_model_to_eval`\n\nSummary:\nThis commit fixes two silent correctness problems with\nthe current implementation of `move_model_to_eval`:\n\n(1) Previously the user had to manually call `eliminate_dead_code`\nbefore calling `move_model_to_eval`, otherwise the dropout pattern\nwon't actually get eliminated. This is because subgraph rewriter\ncomplains the match is not self-contained, and so silently does\nnot do the replacement.\n\n(2) We wish to error when the user calls `model.train()` or\n`model.eval()` on an exported model. This error is raised\ncorrectly immediately after export today, but no longer raised\nafter the user calls prepare or convert.\n\nWe fix (1) by moving the `eliminate_dead_code` call into\n`move_model_to_eval`, and fix (2) by ensuring the respective\nerrors are thrown after prepare and convert as well.\n\nAdditionally, this commit renames `move_model_to_eval` to\n`move_exported_model_to_eval` to be more explicit.\n\nbypass-github-export-checks\n\nTest Plan:\npython test/test_quantization.py TestQuantizePT2E.test_disallow_eval_train\npython test/test_quantization.py TestQuantizePT2E.test_move_exported_model_to_eval\n\n\nImported from OSS\n\nDifferential Revision: D49097293","shortMessageHtmlLink":"[quant][pt2] Fix and rename move_model_to_eval"}},{"before":"4f44fd7302279f22e29973ebced48206ba3c3fb3","after":null,"ref":"refs/heads/export-D48607575","pushedAt":"2023-08-24T16:15:50.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"}},{"before":"a6885764572345750867f7ec1f48d034a54238ae","after":"4f44fd7302279f22e29973ebced48206ba3c3fb3","ref":"refs/heads/export-D48607575","pushedAt":"2023-08-23T16:43:41.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"make python decomp for native_batch_norm CompositeImplicitAutograd, remove native_batch_norm from core aten opset (#107791)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/107791\n\n(From Brian Hirsh)\n\nDescription copied from what I put in a comment in this PR: https://github.com/pytorch/pytorch/pull/106329\n\nSo, the slightly-contentious idea behind this PR is that lower in the stack, I updated torch._decomps.get_decomps() to check not only the decomp table to see if a given op has a decomposition available, but to also check the dispatcher for any decomps registered to the CompositeImplicitAutograd key (link: https://github.com/pytorch/pytorch/pull/105865/files#diff-7008e894af47c01ee6b8eb94996363bd6c5a43a061a2c13a472a2f8a9242ad43R190)\n\nThere's one problem though: we don't actually make any hard guarantees that a given key in the dispatcher points does or does not point to a decomposition. We do rely pretty heavily, however, on the fact that everything registered to the CompositeImplicitAutograd key is in fact a decomposition into other ops.\n\nQAT would like this API to faithfully return \"the set of all decomps that would have run if we had traced through the dispatcher\". However, native_batch_norm is an example of an op that has a pre-autograd decomp registered to it (through op.py_impl(), but the decomp is registered directly to the Autograd key instead of being registered to the CompositeImplicitAutograd key.\n\nIf we want to provide a guarantee to QAT that they can programatically access all decomps that would have run during tracing, then we need to make sure that every decomp we register to the Autograd key is also registered to the CompositeImplicitAutograd key.\n\nThis might sound kind of painful (since it requires auditing), but I think in practice this basically only applies to native_batch_norm.\n\nTest Plan: python test/test_decomp.py\n\nDifferential Revision: D48607575\n\nfbshipit-source-id: 965696eab4119f41d46f1ce4ecb1c20cc788545e","shortMessageHtmlLink":"make python decomp for native_batch_norm CompositeImplicitAutograd, r…"}},{"before":null,"after":"a6885764572345750867f7ec1f48d034a54238ae","ref":"refs/heads/export-D48607575","pushedAt":"2023-08-23T16:33:25.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"make python decomp for native_batch_norm CompositeImplicitAutograd, remove native_batch_norm from core aten opset\n\nSummary:\n(From Brian Hirsh)\n\nDescription copied from what I put in a comment in this PR: https://github.com/pytorch/pytorch/pull/106329\n\nSo, the slightly-contentious idea behind this PR is that lower in the stack, I updated torch._decomps.get_decomps() to check not only the decomp table to see if a given op has a decomposition available, but to also check the dispatcher for any decomps registered to the CompositeImplicitAutograd key (link: https://github.com/pytorch/pytorch/pull/105865/files#diff-7008e894af47c01ee6b8eb94996363bd6c5a43a061a2c13a472a2f8a9242ad43R190)\n\nThere's one problem though: we don't actually make any hard guarantees that a given key in the dispatcher points does or does not point to a decomposition. We do rely pretty heavily, however, on the fact that everything registered to the CompositeImplicitAutograd key is in fact a decomposition into other ops.\n\nQAT would like this API to faithfully return \"the set of all decomps that would have run if we had traced through the dispatcher\". However, native_batch_norm is an example of an op that has a pre-autograd decomp registered to it (through op.py_impl(), but the decomp is registered directly to the Autograd key instead of being registered to the CompositeImplicitAutograd key.\n\nIf we want to provide a guarantee to QAT that they can programatically access all decomps that would have run during tracing, then we need to make sure that every decomp we register to the Autograd key is also registered to the CompositeImplicitAutograd key.\n\nThis might sound kind of painful (since it requires auditing), but I think in practice this basically only applies to native_batch_norm.\n\nTest Plan: python test/test_decomp.py\n\nDifferential Revision: D48607575\n\nfbshipit-source-id: db367aecc07cdde7efa1e802b39c85df045fa18c","shortMessageHtmlLink":"make python decomp for native_batch_norm CompositeImplicitAutograd, r…"}},{"before":"7ad12b1bee71357fec99cefce484b392fbd54201","after":"e7593843d515b30c04e2a26892c1683181064bcc","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-10T21:45:29.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: dcf508c0167c1c362410ca860824299c4b35bab6","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"a9879f7e2b6b556739a2ba9770b3f3a212906302","after":"7ad12b1bee71357fec99cefce484b392fbd54201","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-10T21:38:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 4399308a3492c72d5ea1fb51ffbf45100f456440","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"a42884fe27e21d6e018cdb85e1d81a9f4f30e3f5","after":"a9879f7e2b6b556739a2ba9770b3f3a212906302","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-10T20:53:14.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: fcc8d45c55f44825e379e2ed1184e9738d7e7fb1","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"c4892800a285cc17d3bda5e379bcd4f634f8c7af","after":"a42884fe27e21d6e018cdb85e1d81a9f4f30e3f5","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-10T20:46:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 0bb462baebfb61d629cea8e37f05fd06d51bb9c9","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"1229942a672aed2b3521ac78c26a9741850985ca","after":"c4892800a285cc17d3bda5e379bcd4f634f8c7af","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:40:23.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 38a1a027ed8680edc022a4c5a4015c3b5c811438","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"5fc046d940aff1f0142b73458657998cf39d3944","after":"1229942a672aed2b3521ac78c26a9741850985ca","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:33:08.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 06bdeb260f4377ae3086977ed67dbab8145187b3","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"4712d6ff2a4239c2f4a9fffd7bb646283c0bec90","after":"5fc046d940aff1f0142b73458657998cf39d3944","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:27:08.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 4ebf29c233bb556d3f1b5278364bb1c4edf8eba4","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"53e5d3238037fd92567173396da89c92aea6cd12","after":"4712d6ff2a4239c2f4a9fffd7bb646283c0bec90","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:21:24.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 1f479525b8c39da3e563dd5b1f3bcfe713322bc5","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"132025583546bfea06fd43a70d720a14390d0c25","after":"53e5d3238037fd92567173396da89c92aea6cd12","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:15:03.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: e58c9e44f5dd46d1d2192902393567d1e8fc53a6","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"386e0d6586fb560fe54a6bb0e692b5da777e0ba5","after":"132025583546bfea06fd43a70d720a14390d0c25","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:08:06.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 762cbe7d81265b0c3adac8283dd16a3554b76b5b","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"c0d8a581891bb3d1203701cf26a569269c52402b","after":"386e0d6586fb560fe54a6bb0e692b5da777e0ba5","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T19:01:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: c96b0d086634024485a64b0be54721a6cac75ad4","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":"96a5d29f891446db9358a64b516e54f30bd5f52a","after":"c0d8a581891bb3d1203701cf26a569269c52402b","ref":"refs/heads/export-D46750343","pushedAt":"2023-07-07T18:56:02.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2 (#104110)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/104110\n\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: d73b04b597dbe548776a824ee9216a929c24b518","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2 (pytorch#104110)"}},{"before":null,"after":"96a5d29f891446db9358a64b516e54f30bd5f52a","ref":"refs/heads/export-D46750343","pushedAt":"2023-06-23T16:54:27.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for mobilenetv2\n\nSummary:\nQAT convert for mobilenetv2 was previously not working\nbecause we incorrectly applied dropout during eval as well as\ntraining. This is because, for exported models, model.eval() does\nnot change the behavior of dropout, unlike models with torch ops.\nThis commit simulates the effects of model.eval() for exported\nmodels as well by replacing the aten dropout pattern before eval.\nAs of this commit, end-to-end QAT numerics now match for\nmobilenetv2 between FX and PT2.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nDifferential Revision: D46750343\n\nfbshipit-source-id: 58b99d554bd721e823b874aad12d7620ecf815b8","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for mobilenetv2"}},{"before":null,"after":"8f24449f5c50b218b6c7ef97512025932779ea58","ref":"refs/heads/export-D46707786","pushedAt":"2023-06-22T21:43:58.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Add prepare QAT test for mobilenetv2\n\nSummary:\nPrepare QAT for mobilenetv2 has matching numerics with\nFX. There were two changes needed to achieve this, however.\nFirst, this commit adds observer sharing for ReLU6, which is\nused extensively throughout this model. Second, in the tests we\nhave to use the same manual seed every time we call the models\nin order to get the same results between FX and PT2. This is\nbecause there is a dropout at the end of the model.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2\n\nReviewed By: kimishpatel\n\nDifferential Revision: D46707786\n\nfbshipit-source-id: 44b92a68d1d13d44cb5d3acd4e7ae7d4dde35f7b","shortMessageHtmlLink":"[quant][pt2] Add prepare QAT test for mobilenetv2"}},{"before":"b64790542701aa0ca000e877d7d187eff18b5921","after":"4aa9c87a1af547e0d2c87153038e7dcde286677e","ref":"refs/heads/export-D46606614","pushedAt":"2023-06-21T16:19:38.001Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Update special qspecs after QAT rewrite (#103970)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/103970\n\nSpecial qspecs like `SharedQuantizationSpec` and\n`DerivedQuantizationSpec` refer to other nodes in the graph.\nHowever, after subgraph rewriting in QAT, the nodes referred\nto in these special qspecs may be replaced by new nodes.\nThis could lead to the following error when inserting\nobservers according to these qspecs:\n\n```\nAssertionError: please make sure only refer to edge or node\nthat has observer/fake_quant inserted: 'getitem' not in\ndict_keys([(arg0, convolution_default_1), (mul_tensor, convolution_default_1), getitem_3])\n```\n\nThis commit fixes this by keeping track of the nodes that\nare replaced during subgraph rewriting in QAT, and using\nthis mapping to update the dangling references used in these\nspecial qspecs.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2E.test_qat_update_shared_qspec\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46606614\n\nfbshipit-source-id: ed86c61007658b23121e98417ac754a48fa62a24","shortMessageHtmlLink":"[quant][pt2] Update special qspecs after QAT rewrite (pytorch#103970)"}},{"before":null,"after":"b64790542701aa0ca000e877d7d187eff18b5921","ref":"refs/heads/export-D46606614","pushedAt":"2023-06-21T16:10:55.672Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Update special qspecs after QAT rewrite\n\nSummary:\nSpecial qspecs like `SharedQuantizationSpec` and\n`DerivedQuantizationSpec` refer to other nodes in the graph.\nHowever, after subgraph rewriting in QAT, the nodes referred\nto in these special qspecs may be replaced by new nodes.\nThis could lead to the following error when inserting\nobservers according to these qspecs:\n\n```\nAssertionError: please make sure only refer to edge or node\nthat has observer/fake_quant inserted: 'getitem' not in\ndict_keys([(arg0, convolution_default_1), (mul_tensor, convolution_default_1), getitem_3])\n```\n\nThis commit fixes this by keeping track of the nodes that\nare replaced during subgraph rewriting in QAT, and using\nthis mapping to update the dangling references used in these\nspecial qspecs.\n\nTest Plan: python test/test_quantization.py TestQuantizePT2E.test_qat_update_shared_qspec\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46606614\n\nfbshipit-source-id: 70e937c5aede94530073c729aaa6994b88141ed3","shortMessageHtmlLink":"[quant][pt2] Update special qspecs after QAT rewrite"}},{"before":"82c39d9ed8b726454b15203e7a656056dcadfbfa","after":"28098bc32a12c43b4caa77e15b3a8560072cfb3f","ref":"refs/heads/export-D46564114","pushedAt":"2023-06-20T16:27:33.836Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"andrewor14","name":null,"path":"/andrewor14","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2133137?s=80&v=4"},"commit":{"message":"[quant][pt2] Fix QAT convert for resnet18 (#103759)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/103759\n\nBefore this commit, only prepare QAT numerics matched\nbetween PT2 and FX for resnet18. Convert numerics diverged,\nhowever, for two reasons:\n\n(1) Existing patterns did not handle inplace ReLUs. This commit\nfixes this by adding extra patterns that use these ReLUs instead\nof the normal ones.\n\n(2) Subgraph rewriter could not handle skip connections in\nquantized models, because the dequantize node is used in both\nthe conv node within the match pattern, and an inplace add node\noutside of the match pattern. This led the subgraph matcher to\nfilter out the match, complaining that it was not self contained.\nThis commit fixes this problem by duplicating the dequantize\nnodes, one for each user, such that subsequent matches will\nbe self contained.\n\nbypass-github-export-checks\n\nTest Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_resnet18\n\nReviewed By: jerryzh168\n\nDifferential Revision: D46564114\n\nfbshipit-source-id: 7527b2d977a1a0b7d64f0c81d21e9cdfad0aeca3","shortMessageHtmlLink":"[quant][pt2] Fix QAT convert for resnet18 (pytorch#103759)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAADj75w_QA","startCursor":null,"endCursor":null}},"title":"Activity · andrewor14/pytorch"}