
Upstream sharpened GraphIR #64

Merged
merged 5 commits into from
Nov 21, 2022
Conversation

ganler
Member

@ganler ganler commented Nov 20, 2022

  • Sharpened GraphIR to support various mutations and multiple return values.
    • SSA form (credit to LLVM IR and ONNX).
    • Multiple return values (credit to ONNX), bound to instructions; a return value is identified as ${instruction_id}.{index}.
    • Mutation: IR-safety-oriented mutation APIs: 1) replace use; 2) remove unused; 3) insert. Given a valid GraphIR, each operation returns a GraphIR that is also valid.
    • IR repair: partial validity is allowed. The topological order and use-def chains of a GraphIR may be temporarily broken and later fixed by GraphIR.wellform_repair. This is designed for longer-context IR mutations.
    • Integration of GraphIR into all existing pipelines is still under development in experimental repos/branches and will be upstreamed once it proves stable.
  • Other minor improvements: added support for a few more data types (still unused in generation, though) to let NNReduce debug quantized models.
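The SSA naming scheme and the "remove unused" mutation described above can be illustrated with a minimal sketch. Everything here (the `Instruction`/`GraphIR` classes and the `remove_unused` method) is a hypothetical illustration of the design, not the actual nnsmith API:

```python
# Hypothetical sketch of an SSA-style graph IR where return values are
# bound to instructions and named "{instruction_id}.{index}", plus a
# "remove unused" mutation that preserves IR validity.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instruction:
    inst_id: int
    op: str
    args: List[str]          # SSA value names this instruction uses
    num_outputs: int = 1

    def outputs(self) -> List[str]:
        # Each return value is identified as "{instruction_id}.{index}".
        return [f"{self.inst_id}.{i}" for i in range(self.num_outputs)]

@dataclass
class GraphIR:
    insts: List[Instruction] = field(default_factory=list)

    def remove_unused(self, roots: List[str]) -> "GraphIR":
        """Drop instructions whose outputs are never (transitively) used."""
        live = set(roots)
        kept = []
        # Walk in reverse topological order: consumers before producers.
        for inst in reversed(self.insts):
            if any(o in live for o in inst.outputs()):
                kept.append(inst)
                live.update(inst.args)   # args of a live inst become live
        return GraphIR(list(reversed(kept)))

ir = GraphIR([
    Instruction(0, "input", [], num_outputs=1),
    Instruction(1, "relu", ["0.0"]),
    Instruction(2, "neg", ["0.0"]),      # dead: output "2.0" is never used
    Instruction(3, "split", ["1.0"], num_outputs=2),
])
pruned = ir.remove_unused(roots=["3.0", "3.1"])
print([i.op for i in pruned.insts])      # -> ['input', 'relu', 'split']
```

Since the pass only ever deletes instructions with no live users, a valid input GraphIR yields a valid output GraphIR, matching the IR-safety goal stated above.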

@ganler
Member Author

ganler commented Nov 21, 2022

The CI failure is due to two bugs in TVM-10. I sent a PR: apache/tvm#13448.

Maybe the type tester should be isolated into individual processes so that segfaults can be caught.
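Process isolation for segfault-prone tests can be sketched with the standard library. This is a hedged illustration of the idea, not nnsmith's implementation; `crashes` simulates a native crash with a self-delivered SIGSEGV:

```python
# Run a test callable in a child process so a segfault kills only the
# child; the parent observes a negative exit code instead of dying.
import multiprocessing as mp
import os
import signal

def run_isolated(fn, *args, timeout=30):
    """Run fn(*args) in a child process; return (ok, exit_code_or_reason)."""
    proc = mp.Process(target=fn, args=args)
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return False, "timeout"
    # A signal-killed child reports -signum (e.g. -11 for SIGSEGV on Linux).
    return proc.exitcode == 0, proc.exitcode

def crashes():
    # Stand-in for a type-test case that triggers a native crash.
    os.kill(os.getpid(), signal.SIGSEGV)

if __name__ == "__main__":
    print(run_isolated(crashes))        # (False, -11) on Linux
    print(run_isolated(print, "ok"))    # (True, 0)
```

The cost is one fork/spawn per test case, which is usually acceptable for a fuzzing-style CI where a single crash would otherwise abort the whole run.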

@ganler
Member Author

ganler commented Nov 21, 2022

Temporarily bypassed by downgrading the CI requirement to tvm-0.9.0.

@ganler ganler merged commit 9e809b6 into main Nov 21, 2022
@ganler ganler deleted the gir branch November 21, 2022 00:38
junrushao pushed a commit to apache/tvm that referenced this pull request Nov 21, 2022
…n.matmul check (#13448)

This PR brings 2 bug fixes:
1. ONNX converter for matmul: ONNX matmul follows NumPy [rules](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html):
> If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
> If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.

My previous fix (#11174) did not cover the second rule (appending a 1 to the dimensions of a 1-D rhs).
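The two NumPy promotion rules quoted above can be checked directly; the second one (1-D rhs promoted by appending a 1) is the case the earlier fix missed:

```python
# NumPy matmul promotion rules for 1-D operands, as quoted from the
# NumPy docs: 1-D lhs gets a 1 prepended, 1-D rhs gets a 1 appended,
# and the added dimension is removed from the result.
import numpy as np

A = np.ones((3, 4))

v = np.ones(4)                 # 1-D rhs: promoted to (4, 1)
print((A @ v).shape)           # (3, 1) with the appended 1 removed -> (3,)

w = np.ones(3)                 # 1-D lhs: promoted to (1, 3)
print((w @ A).shape)           # (1, 4) with the prepended 1 removed -> (4,)
```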

2. Relay's `nn.matmul` takes 2-D matrices, and its shape checker was removed in a recent PR (#13287). This PR restores the checker so that invalid inputs raise a readable TVMError instead of crashing the process (a crash would also terminate the CI in ise-uiuc/nnsmith#64 when using TVM-10).
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
@ganler ganler mentioned this pull request Dec 30, 2022
@ganler ganler mentioned this pull request Feb 19, 2023