[ONNX] Update 'Functionalize' pass to support pre-decomp graph; Drop 'aten_graph' arg for 'DynamoExporter' #99667
Commits on Apr 20, 2023
- 9962e88: [ONNX] Drop 'aten_graph' arg for 'DynamoExporter'
Commits on Apr 21, 2023
- 971e75c: Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

  Summary:
  - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing. That argument will be dropped by #99555.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Functionalization currently cannot work properly on an aten-level graph, so it must happen before lowering and decomposition.
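The ordering constraint described above (functionalize before lowering and decomposition) can be demonstrated with public `torch.func` APIs. This is a minimal sketch of the idea, not the exporter's actual pass:

```python
import torch
from torch.func import functionalize
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    y = x.clone()
    y.add_(1)  # in-place mutation on an intermediate
    return y

# Tracing the functionalized function records only out-of-place ops,
# so a later decomposition/lowering pass sees a mutation-free graph.
gm = make_fx(functionalize(f))(torch.randn(3))

inplace = [
    n for n in gm.graph.nodes
    if n.op == "call_function"
    and isinstance(n.target, torch._ops.OpOverload)
    and n.target._schema.name.endswith("_")
]
assert not inplace  # add_ was rewritten to its out-of-place form
```

Running decomposition first would instead hand functionalization an aten-level graph it cannot currently handle, which is the ordering bug this stack works around.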
- dc46e34: Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

  Summary:
  - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing. That argument will be dropped by #99555.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Functionalization currently cannot work properly on an aten-level graph, so it must happen before lowering and decomposition.
  - Introduce a `ReplaceInplacePostFunctionalization` pass to replace in-place variant ops with their out-of-place versions. These ops are created by aten graph lowering and decomposition after functionalization; they perform no real mutation, since mutation is expected to have been handled by functionalization already. Workaround to unblock #99662.
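`ReplaceInplacePostFunctionalization` itself lives inside the exporter; its core idea, rewriting `aten` in-place variants to their out-of-place counterparts once functionalization has made the mutation semantics irrelevant, can be sketched as a small FX pass. The function name and coverage here are illustrative, not the real pass:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def swap_inplace_for_outplace(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Illustrative sketch: rewrite aten in-place variants (schema name
    ending in '_') to the same-named out-of-place op. Only sound after
    functionalization, when no real aliasing/mutation remains."""
    for node in gm.graph.nodes:
        if node.op != "call_function" or not isinstance(node.target, torch._ops.OpOverload):
            continue
        name = node.target._schema.name.split("::")[-1]  # e.g. "relu_"
        if name.endswith("_") and not name.endswith("__") and hasattr(torch.ops.aten, name[:-1]):
            # OpOverloadPacket is callable; overload resolution happens at call time.
            node.target = getattr(torch.ops.aten, name[:-1])
    gm.graph.lint()
    gm.recompile()
    return gm

def f(x):
    out = x + 1
    return torch.ops.aten.relu_(out)  # mutates only a fresh intermediate

gm = swap_inplace_for_outplace(make_fx(f)(torch.randn(3)))
```

The real pass additionally has to patch node metadata and handle ops whose out-of-place name is not simply the in-place name minus the trailing underscore.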
- b3b9bef: Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

  Summary:
  - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing. That argument will be dropped by #99555.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Functionalization currently cannot work properly on an aten-level graph, so it must happen before lowering and decomposition.
  - Introduce a `ReplaceInplacePostFunctionalization` pass to replace in-place variant ops with their out-of-place versions. These ops are created by aten graph lowering and decomposition after functionalization; they perform no real mutation, since mutation is expected to have been handled by functionalization already. Workaround to unblock #99662.
Commits on Apr 28, 2023
- 6910027: Update functionalize pass. Note model train/eval mode topic on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

  Summary:
  - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing. That argument will be dropped by #99555.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Functionalization currently cannot work properly on an aten-level graph, so it must happen before lowering and decomposition.
  - Introduce a `ReplaceInplacePostFunctionalization` pass to replace in-place variant ops with their out-of-place versions. These ops are created by aten graph lowering and decomposition after functionalization; they perform no real mutation, since mutation is expected to have been handled by functionalization already. Workaround to unblock #99662.
- fe42854: Revert functionalize and decomp order on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

  Summary:
  - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing. That argument will be dropped by #99555.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Functionalization currently cannot work properly on an aten-level graph, so it must happen before lowering and decomposition.
  - Introduce a `ReplaceInplacePostFunctionalization` pass to replace in-place variant ops with their out-of-place versions. These ops are created by aten graph lowering and decomposition after functionalization; they perform no real mutation, since mutation is expected to have been handled by functionalization already. Workaround to unblock #99662.
Commits on Apr 29, 2023
- 73b370a: Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'; Apply workaround in 'Functionalize' pass"

  Summary:
  - Previously this was required by, and entangled with, `tracing_mode=symbolic` for `dynamic` tracing. That is resolved by #99555 and its follow-ups.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Updated `Functionalization` to work around #99774 (comment).

  Todo:
  - Training vs. eval in `dynamo_export`: we are effectively exporting all models in training mode by default, but for the sake of this export we are only interested in eval mode. The question is, should `dynamo_export` call `model.eval()`? Tests with models containing batch norm fail 'functionalization' in training mode; we explicitly call `model.eval()` for these models for now.
  - Merge the decomp and functionalize passes. Both call into `make_fx`, so merging potentially improves performance, but it is unclear whether it would change behavior.

  Workaround to unblock #99662.
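The batch-norm issue noted in the todo is easy to reproduce outside the exporter: in train mode, `BatchNorm` mutates its running-stat buffers on every forward, which is exactly the kind of mutation that trips functionalization, while in eval mode the forward only reads those buffers. A minimal demonstration in plain PyTorch:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.BatchNorm1d(4))
bn = model[1]

# Train mode: forward updates bn.running_mean / bn.running_var in place.
model.train()
before = bn.running_mean.clone()
model(torch.randn(8, 4))
train_mutates = not torch.equal(before, bn.running_mean)

# Eval mode: forward only reads the buffers, so the traced graph is
# mutation-free and safe to functionalize.
model.eval()
before = bn.running_mean.clone()
model(torch.randn(8, 4))
eval_mutates = not torch.equal(before, bn.running_mean)
```

This is why calling `model.eval()` before export sidesteps the failure for such models.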
Commits on May 1, 2023
- 3e3cf02: Rebase on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'; Apply workaround in 'Functionalize' pass"

  Summary:
  - Previously this was required by, and entangled with, `tracing_mode=symbolic` for `dynamic` tracing. That is resolved by #99555 and its follow-ups.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Updated `Functionalization` to work around #99774 (comment).

  Todo:
  - Training vs. eval in `dynamo_export`: we are effectively exporting all models in training mode by default, but for the sake of this export we are only interested in eval mode. The question is, should `dynamo_export` call `model.eval()`? Tests with models containing batch norm fail 'functionalization' in training mode; we explicitly call `model.eval()` for these models for now.
  - Merge the decomp and functionalize passes. Both call into `make_fx`, so merging potentially improves performance, but it is unclear whether it would change behavior.

  Workaround to unblock #99662.
Commits on May 3, 2023
- 3fc55a6: Rebase on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'; Apply workaround in 'Functionalize' pass"

  Summary:
  - Previously this was required by, and entangled with, `tracing_mode=symbolic` for `dynamic` tracing. That is resolved by #99555 and its follow-ups.
  - The later decomposition pass performs graph lowering, so this step is duplicated.
  - Updated `Functionalization` to work around #99774 (comment).

  Todo:
  - Training vs. eval in `dynamo_export`: we are effectively exporting all models in training mode by default, but for the sake of this export we are only interested in eval mode. The question is, should `dynamo_export` call `model.eval()`? Tests with models containing batch norm fail 'functionalization' in training mode; we explicitly call `model.eval()` for these models for now.
  - Merge the decomp and functionalize passes. Both call into `make_fx`, so merging potentially improves performance, but it is unclear whether it would change behavior.

  Workaround to unblock #99662.