
[Experimental / WIP] Generate a miniscule model for each OP only, retain the results of test inference, and propagate to all OPs. STEP.1,2 #184

Merged — 37 commits, Feb 14, 2023

Conversation

PINTO0309 (Owner) commented Feb 11, 2023

1. Content and background

  • [Experimental / WIP] Generate a miniscule model for each OP only, retain the results of test inference, and propagate to all OPs.
    • First step
      • Softmax
      • Sub
      • ReduceMax
      • Reshape
        In the figure below, the output of the Reshape-only model is propagated to ReduceMax and Sub. Next, a ReduceMax-only model is generated, and test inference is performed using the value propagated from the Reshape model.
      • In other words, a model is generated for each single OP, and the test-inference result of every OP is stored.
      • Because inference results are stored sequentially for single-OP models, it becomes easy to step backward from the point where an inference-accuracy error is detected and rerun the inference test from the OP where the problem is assumed to have occurred.
        (figure: output of the single-OP Reshape model propagated to ReduceMax and Sub)
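The per-OP propagation described above can be sketched in plain NumPy. This is a minimal illustration only: the actual tool generates single-OP ONNX models, and `run_single_op` and the cache keys here are hypothetical names, not the tool's API.

```python
import numpy as np

# Minimal sketch of per-OP test inference with result propagation.
# run_single_op and the cache keys are illustrative, not the tool's API.
def run_single_op(op_type, inputs, attrs=None):
    if op_type == "Reshape":
        data, shape = inputs
        return data.reshape(shape)
    if op_type == "ReduceMax":
        (data,) = inputs
        return data.max(axis=attrs["axes"], keepdims=True)
    if op_type == "Sub":
        a, b = inputs
        return a - b
    raise NotImplementedError(op_type)

cache = {}  # retains every single-OP model's test-inference output
x = np.arange(6, dtype=np.float32)

# Reshape-only model; its output is propagated to ReduceMax and Sub.
cache["reshape_out"] = run_single_op("Reshape", [x, (2, 3)])
# ReduceMax-only model, fed the value propagated from the Reshape model.
cache["reducemax_out"] = run_single_op(
    "ReduceMax", [cache["reshape_out"]], {"axes": 1}
)
# Sub-only model, fed both cached results.
cache["sub_out"] = run_single_op(
    "Sub", [cache["reshape_out"], cache["reducemax_out"]]
)
```

Because every intermediate output is retained in the cache, a check can back up to the first OP whose accuracy error was detected and rerun only from that point.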
    • Second step
      • To speed up the inference checks on ddnm.onnx, the OPs in the following list are addressed.
      • Eliminate the redundant, repeated inference-accuracy correction process for Softmax.
      • Implement on a trial basis for the following OPs for verification.
        • Add
        • Concat
        • Conv
        • Cos
        • Gemm
        • InstanceNormalization
        • MatMul
        • Mul
        • Div
        • Mod
        • ReduceMax
        • Reshape
        • Resize
        • Sigmoid
        • Sin
        • Softmax
        • Sub
        • Transpose
        • Unsqueeze
        ssc4onnx -if ddnm.onnx
        ┏━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
        ┃ OP Type                ┃ OPs        ┃
        ┡━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
        │ Add                    │ 141        │
        │ Concat                 │ 19         │
        │ Conv                   │ 120        │
        │ Cos                    │ 1          │
        │ Gemm                   │ 34         │
        │ InstanceNormalization  │ 71         │
        │ MatMul                 │ 12         │
        │ Mul                    │ 145        │
        │ Reshape                │ 166        │
        │ Resize                 │ 5          │
        │ Sigmoid                │ 67         │
        │ Sin                    │ 1          │
        │ Softmax                │ 6          │
        │ Transpose              │ 12         │
        │ Unsqueeze              │ 65         │
        │ ---------------------- │ ---------- │
        │ Total number of OPs    │ 865        │
        │ ====================== │ ========== │
        │ Model Size             │ 433.8MiB   │
        └────────────────────────┴────────────┘
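The trial list above can be expressed as a simple gate deciding whether an OP type gets the new single-OP test inference. A sketch only; `should_run_trial` is a hypothetical name, not the tool's API.

```python
# OP types selected for the trial implementation (from the list above).
TRIAL_OPS = {
    "Add", "Concat", "Conv", "Cos", "Gemm", "InstanceNormalization",
    "MatMul", "Mul", "Div", "Mod", "ReduceMax", "Reshape", "Resize",
    "Sigmoid", "Sin", "Softmax", "Sub", "Transpose", "Unsqueeze",
}

def should_run_trial(op_type: str) -> bool:
    """Hypothetical gate: True if this OP type gets single-OP test inference."""
    return op_type in TRIAL_OPS
```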
        
    • Third step (to be split into a separate pull request)
      • Add a process that computes the maximum absolute error over the inference results of all OPs.
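The maximum-absolute-error check could look like the following. A sketch under assumptions: the function name and signature are illustrative, not the tool's actual implementation.

```python
import numpy as np

def max_abs_error(expected, actual) -> float:
    """Largest element-wise absolute difference between two inference results.

    Hypothetical helper: compares, e.g., the propagated single-OP result
    against a reference inference of the same OP.
    """
    expected = np.asarray(expected, dtype=np.float64)
    actual = np.asarray(actual, dtype=np.float64)
    return float(np.max(np.abs(expected - actual)))
```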
    • Fourth step (to be split into a separate pull request)
      • Implement propagation of inference results for all OPs other than those listed above.
    • Fifth step (to be split into a separate pull request)
      • Replace primitive operations with NumPy operations instead of running inference through TensorFlow models, making the process as fast as possible.
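Dispatching primitive OPs to NumPy instead of a TensorFlow model might be sketched as follows. `NUMPY_OPS` and `fast_infer` are hypothetical names; the fallback behavior is an assumption for illustration.

```python
import numpy as np

# Hypothetical dispatch table: primitive OPs computed directly in NumPy
# instead of building and running a TensorFlow model for test inference.
NUMPY_OPS = {
    "Sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "Mul": lambda a, b: np.multiply(a, b),
    "Transpose": lambda x, perm: np.transpose(x, perm),
}

def fast_infer(op_type, *inputs):
    """Use the NumPy kernel when one exists; otherwise signal a TF fallback."""
    fn = NUMPY_OPS.get(op_type)
    if fn is None:
        raise NotImplementedError(f"fall back to TF inference for {op_type}")
    return fn(*inputs)
```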
    • Sixth step (to be split into a separate pull request)

2. Summary of corrections

3. Before/After (If there is an operating log that can be used as a reference)

4. Issue number (only if there is a related issue)

Implementation of strict mode #145
[DDNM] Support additional parameters to onnxsim #175

@PINTO0309 PINTO0309 changed the title [Experimental / WIP] Generate a miniscule model for each OP only, retain the results of test inference, and propagate to all OPs. [Experimental / WIP] Generate a miniscule model for each OP only, retain the results of test inference, and propagate to all OPs. STEP.1,2 Feb 14, 2023
@PINTO0309 PINTO0309 merged commit 25d1dab into main Feb 14, 2023
@PINTO0309 PINTO0309 added the Propagate Propagate label Feb 14, 2023