
[ONNX] Update 'Functionalize' pass to support pre-decomp graph; Drop 'aten_graph' arg for 'DynamoExporter' #99667

Closed
wants to merge 9 commits

Commits on Apr 20, 2023

  1. [ONNX] Drop 'aten_graph' arg for 'DynamoExporter'

    [ghstack-poisoned]
    BowenBao committed Apr 20, 2023
    9962e88

Commits on Apr 21, 2023

  1. Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

    Summary
    - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing.
      That argument will be dropped by #99555.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Functionalization currently cannot work properly on an aten-level graph,
      so it must happen before lowering and decomposition.
    
    [ghstack-poisoned]
    BowenBao committed Apr 21, 2023
    971e75c
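The ordering constraint above (functionalize first, then lower and decompose) can be sketched with the public `torch.func` API; this is a minimal illustration assuming `torch.func.functionalize` composed with `make_fx` is the underlying mechanism, not necessarily the exact code in this PR:

```python
import torch
from torch.func import functionalize
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    y = x.clone()
    y.add_(1)  # inplace mutation on a local tensor
    return y

# Functionalize first, then trace: the captured graph contains only
# out-of-place aten ops (aten.add instead of aten.add_).
gm = make_fx(functionalize(f))(torch.randn(3))
assert not any("add_" in str(n.target) for n in gm.graph.nodes)
```

Running functionalization on the pre-decomposition callable like this sidesteps the problem of functionalizing an already-lowered aten graph.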
  2. Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

    Summary
    - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing.
      That argument will be dropped by #99555.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Functionalization currently cannot work properly on an aten-level graph,
      so it must happen before lowering and decomposition.
    - Introduce a `ReplaceInplacePostFunctionalization` pass that replaces inplace-variant ops with
      their outplace versions. These ops are created by aten graph lowering and decomposition after
      functionalization; they do not perform any real mutation, since mutation is expected to have
      been handled by functionalization.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed Apr 21, 2023
    dc46e34
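The commit message does not show the body of `ReplaceInplacePostFunctionalization`; a minimal sketch of what such an inplace-to-outplace FX rewrite could look like follows. The overload lookup through `torch.ops.aten` and the helper name are assumptions of this sketch, not necessarily the PR's implementation:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def replace_inplace_with_outplace(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    # Sketch only: this rewrite is sound solely after functionalization,
    # when the remaining inplace ops no longer mutate or alias graph inputs.
    for node in gm.graph.nodes:
        if node.op != "call_function" or not isinstance(node.target, torch._ops.OpOverload):
            continue
        packet_name = node.target._schema.name.split("::")[-1]  # e.g. "add_"
        if not packet_name.endswith("_") or packet_name.endswith("__"):
            continue  # skip outplace ops and dunder ops like __and__
        outplace_packet = getattr(torch.ops.aten, packet_name[:-1], None)
        overload_name = node.target._schema.overload_name or "default"
        if outplace_packet is None or not hasattr(outplace_packet, overload_name):
            continue  # no outplace counterpart; leave the node alone
        node.target = getattr(outplace_packet, overload_name)
    gm.graph.lint()
    gm.recompile()
    return gm

def f(x):
    y = x.clone()
    y.add_(1)
    return y

gm = make_fx(f)(torch.randn(3))    # un-functionalized trace keeps aten.add_
replace_inplace_with_outplace(gm)  # rewritten to aten.add
assert not any("add_" in str(n.target) for n in gm.graph.nodes)
```

Reassigning `node.target` and recompiling is the standard FX graph-manipulation pattern, which keeps the pass a pure graph-to-graph rewrite.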
  3. Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"

    Summary
    - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing.
      That argument will be dropped by #99555.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Functionalization currently cannot work properly on an aten-level graph,
      so it must happen before lowering and decomposition.
    - Introduce a `ReplaceInplacePostFunctionalization` pass that replaces inplace-variant ops with
      their outplace versions. These ops are created by aten graph lowering and decomposition after
      functionalization; they do not perform any real mutation, since mutation is expected to have
      been handled by functionalization.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed Apr 21, 2023
    b3b9bef

Commits on Apr 28, 2023

  1. Update functionalize pass. Note model train/eval mode topic on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"
    
    
    Summary
    - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing.
      That argument will be dropped by #99555.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Functionalization currently cannot work properly on an aten-level graph,
      so it must happen before lowering and decomposition.
    - Introduce a `ReplaceInplacePostFunctionalization` pass that replaces inplace-variant ops with
      their outplace versions. These ops are created by aten graph lowering and decomposition after
      functionalization; they do not perform any real mutation, since mutation is expected to have
      been handled by functionalization.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed Apr 28, 2023
    6910027
  2. Revert functionalize and decomp order on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'"
    
    
    Summary
    - Previously this was required by `tracing_mode=symbolic` for `dynamic` tracing.
      That argument will be dropped by #99555.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Functionalization currently cannot work properly on an aten-level graph,
      so it must happen before lowering and decomposition.
    - Introduce a `ReplaceInplacePostFunctionalization` pass that replaces inplace-variant ops with
      their outplace versions. These ops are created by aten graph lowering and decomposition after
      functionalization; they do not perform any real mutation, since mutation is expected to have
      been handled by functionalization.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed Apr 28, 2023
    fe42854

Commits on Apr 29, 2023

  1. Update on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'; Apply workaround in 'Functionalize' pass"
    
    
    Summary
    - Previously this was required by and entangled with `tracing_mode=symbolic` for `dynamic` tracing.
      That is resolved by #99555 and its follow ups.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Updated `Functionalization` to work around #99774 (comment)
    
    Todo
    - Training vs eval in dynamo_export
      We are effectively exporting all models in training mode by default,
      but for the purposes of this export we are only interested in eval mode.
      The question is: should we call `model.eval()` in `dynamo_export`?
      Tests with models containing batch norm fail 'functionalization' in training mode.
      We explicitly call `model.eval()` for these models for now.
    - Merge the decomp and functionalize passes. Both call into `make_fx`.
      Merging could improve performance, but it is unclear whether it would
      change behavior.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed Apr 29, 2023
    73b370a
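The train/eval concern in the todo above can be illustrated with a minimal sketch; `TinyBNModel` is a hypothetical module for illustration, not code from the PR:

```python
import torch

class TinyBNModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = torch.nn.BatchNorm1d(4)

    def forward(self, x):
        return self.bn(x)

model = TinyBNModel()
# In training mode, batch norm mutates its running-stat buffers on every
# forward pass, which is what trips up functionalization; switching the
# module to eval mode makes the forward pass read-only w.r.t. buffers.
model.eval()
assert not model.training
out = model(torch.randn(2, 4))
```

In eval mode, `BatchNorm1d` normalizes with the stored `running_mean`/`running_var` instead of updating them, so no buffer mutation reaches the traced graph.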

Commits on May 1, 2023

  1. Rebase on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'; Apply workaround in 'Functionalize' pass"
    
    
    Summary
    - Previously this was required by and entangled with `tracing_mode=symbolic` for `dynamic` tracing.
      That is resolved by #99555 and its follow ups.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Updated `Functionalization` to work around #99774 (comment)
    
    Todo
    - Training vs eval in dynamo_export
      We are effectively exporting all models in training mode by default,
      but for the purposes of this export we are only interested in eval mode.
      The question is: should we call `model.eval()` in `dynamo_export`?
      Tests with models containing batch norm fail 'functionalization' in training mode.
      We explicitly call `model.eval()` for these models for now.
    - Merge the decomp and functionalize passes. Both call into `make_fx`.
      Merging could improve performance, but it is unclear whether it would
      change behavior.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed May 1, 2023
    3e3cf02

Commits on May 3, 2023

  1. Rebase on "[ONNX] Drop 'aten_graph' arg for 'DynamoExporter'; Apply workaround in 'Functionalize' pass"
    
    
    Summary
    - Previously this was required by and entangled with `tracing_mode=symbolic` for `dynamic` tracing.
      That is resolved by #99555 and its follow ups.
    - The later decomposition pass performs graph lowering, so this step was redundant.
    - Updated `Functionalization` to work around #99774 (comment)
    
    Todo
    - Training vs eval in dynamo_export
      We are effectively exporting all models in training mode by default,
      but for the purposes of this export we are only interested in eval mode.
      The question is: should we call `model.eval()` in `dynamo_export`?
      Tests with models containing batch norm fail 'functionalization' in training mode.
      We explicitly call `model.eval()` for these models for now.
    - Merge the decomp and functionalize passes. Both call into `make_fx`.
      Merging could improve performance, but it is unclear whether it would
      change behavior.
    
    Workaround to unblock #99662.
    
    [ghstack-poisoned]
    BowenBao committed May 3, 2023
    3fc55a6
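If the decomp and functionalize passes were merged as the todo suggests, both would collapse into a single `make_fx` call. A sketch of that shape, under the assumption that `torch._decomp.core_aten_decompositions` supplies the decomposition table (the exporter may use a different table):

```python
import torch
from torch.func import functionalize
from torch._decomp import core_aten_decompositions
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    return torch.nn.functional.gelu(x)

# A single make_fx trace that both functionalizes and decomposes,
# instead of two separate make_fx passes over the graph.
gm = make_fx(
    functionalize(f),
    decomposition_table=core_aten_decompositions(),
)(torch.randn(3))
```

This avoids re-tracing the module a second time; the open question from the commit message is whether interleaving decomposition with functionalization in one trace produces a different graph than running them back to back.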