
Add Gemm as an operator #47

Merged — 3 commits merged into onnx:master on Sep 29, 2017
Conversation

@bddppq (Member) commented Sep 22, 2017

Having Gemm would be super helpful for backend optimization work. Also, many popular frameworks have an operator similar to Gemm, so adding it will make it convenient for their frontends to export to ONNX.

@ghost commented Sep 22, 2017

Rather than discuss this design choice for GEMM specifically, I continued the discussion started in #24.

@dzhulgakov (Member) commented:
From offline discussion: add broadcasting to C
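
As a concrete illustration of what broadcasting on C means here, a minimal NumPy sketch of the proposed Gemm semantics (the gemm helper below is illustrative only; attribute names mirror the op being discussed, not a fixed API):

```python
import numpy as np

# Illustrative sketch: Y = alpha * A' * B' + beta * C, where A'/B' are optionally
# transposed and C is broadcast to the (M, N) shape of the product.
def gemm(A, B, C, alpha=1.0, beta=1.0, transA=0, transB=0):
    A = A.T if transA else A
    B = B.T if transB else B
    Y = alpha * (A @ B)
    # Broadcasting on C: a bias of shape (N,) or (1, N) expands across the M rows.
    return Y + beta * np.broadcast_to(C, Y.shape)

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
bias = np.random.rand(5)       # rank-1 C, broadcast over the 3 rows
print(gemm(A, B, bias).shape)  # (3, 5)
```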

@ebarsoum (Contributor) commented:
Why isn't it experimental initially?

@bddppq (Member, Author) commented Sep 27, 2017

@ebarsoum It was experimental initially in this PR; in my most recent commit I have moved it out of experimental.

@ebarsoum (Contributor) commented:
@bddppq Why did we move it out of experimental?

@bddppq (Member, Author) commented Sep 27, 2017

@ebarsoum From an offline discussion with @yuanbyu, @prasanthpul, and @dzhulgakov: we will add Gemm (as non-experimental) and remove FC.
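
For context on the FC removal: a Caffe2-style FC stores its weight as (num_out, num_in) and computes X · Wᵀ + b, so it maps directly onto Gemm with transB set. A small self-contained sketch (helper names are illustrative, not the exporter's code):

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=1.0, transA=0, transB=0):
    # Same illustrative Gemm sketch as above: Y = alpha * A' * B' + beta * C.
    A = A.T if transA else A
    B = B.T if transB else B
    return alpha * (A @ B) + beta * C

def fc(X, W, b):
    # Caffe2-style FC: W has shape (num_out, num_in), b has shape (num_out,).
    return X @ W.T + b

X, W, b = np.random.rand(2, 8), np.random.rand(16, 8), np.random.rand(16)
# FC(X, W, b) is exactly Gemm(X, W, b) with transB=1 and alpha = beta = 1.
assert np.allclose(fc(X, W, b), gemm(X, W, b, transB=1))
```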

@ezyang merged commit 49959d0 into onnx:master on Sep 29, 2017
@prasanthpul changed the title from "Add Gemm as an experimental operator" to "Add Gemm as an operator" on Feb 14, 2019
pranavm-nvidia pushed a commit to pranavm-nvidia/onnx that referenced this pull request on Jan 7, 2020:
* Add error checks to file IO in onnx2trt

* Fix handling of auto_pad for opset 7

* Fix handling of Add/Mul broadcasting for opset 7

* Add support for tensor Div/Sub weights

* Prevent squeeze_trailing_dims removing all dims

* Fix shape bugs in combineTensorsElementwise

- This function incorrectly handled weights with rank != tensor_rank,
  particularly with respect to the batch dim, which a given weights
  array may or may not have.
- It also incorrectly attempted to expand the dims of tensors that
  had insufficient rank, which cannot be done due to the batch dim
  always (implicitly) being the left-most dim in TRT.
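
To make the rank-alignment issue concrete, here is a small Python sketch of the idea described above, not the onnx2trt code itself: weights may or may not carry an explicit batch dim, and only the weights (never the tensor, whose batch dim is implicit and left-most) can be padded with leading 1s for broadcasting. The function name and exact rules here are hypothetical.

```python
def align_weight_dims(weight_shape, tensor_rank, batch_size=None):
    """Align a weights shape to a tensor's explicit (non-batch) rank for broadcasting."""
    dims = list(weight_shape)
    # If the weights carry an explicit batch dim matching the network batch size,
    # drop it: the tensor's batch dim is implicit and must not be broadcast against.
    if batch_size is not None and len(dims) == tensor_rank + 1 and dims[0] == batch_size:
        dims = dims[1:]
    # Left-pad the weights (not the tensor) with 1s so elementwise broadcast works.
    while len(dims) < tensor_rank:
        dims.insert(0, 1)
    return dims

print(align_weight_dims([64], tensor_rank=3))                        # [1, 1, 64]
print(align_weight_dims([8, 3, 1, 1], tensor_rank=3, batch_size=8))  # [3, 1, 1]
```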