
Support missing ONNX ops to import optimum/all-MiniLM-L6-v2 #600

Open
19 of 21 tasks
antimora opened this issue Aug 7, 2023 · 15 comments
Labels
good first issue Good for newcomers help wanted Extra attention is needed onnx

Comments

@antimora
Collaborator

antimora commented Aug 7, 2023

llogiq on Reddit requested support for missing ONNX ops. We are filing this issue ticket to prioritize them.

The model: https://huggingface.co/optimum/all-MiniLM-L6-v2/blob/main/model.onnx

The ops used in this model (checked if supported):

  • Add
  • Cast
  • Concat
  • Constant
  • Div
  • Erf
  • Gather
  • Gemm
  • MatMul
  • Mul
  • Pow
  • ReduceMean
  • Reshape
  • Shape
  • Slice
  • Softmax
  • Sqrt
  • Sub
  • Tanh
  • Transpose
  • Unsqueeze

All of these ops are already implemented in Burn; they now need to be supported by burn-import's op conversion.
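As an aside on where an op like Erf comes from: BERT-family models such as MiniLM typically use the exact GELU activation, gelu(x) = 0.5 · x · (1 + erf(x / √2)), which is why Erf shows up in the export. Below is a hedged, plain-Rust sketch (not Burn code; Burn backends provide their own erf) using the Abramowitz & Stegun 7.1.26 polynomial approximation, whose maximum absolute error is about 1.5e-7:

```rust
// Approximate erf(x) via Abramowitz & Stegun formula 7.1.26.
// erf is odd, so compute for |x| and restore the sign.
fn erf(x: f64) -> f64 {
    let sign = if x < 0.0 { -1.0 } else { 1.0 };
    let x = x.abs();
    let t = 1.0 / (1.0 + 0.3275911 * x);
    // Horner evaluation of a1*t + a2*t^2 + ... + a5*t^5.
    let poly = t
        * (0.254829592
            + t * (-0.284496736
                + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
    sign * (1.0 - poly * (-x * x).exp())
}

// Exact GELU as used by BERT-style models: 0.5 * x * (1 + erf(x / sqrt(2))).
fn gelu(x: f64) -> f64 {
    0.5 * x * (1.0 + erf(x / std::f64::consts::SQRT_2))
}

fn main() {
    assert!(erf(0.0).abs() < 1e-7); // erf(0) = 0
    assert!((erf(1.0) - 0.8427007).abs() < 1e-4);
    assert!((gelu(2.0) - 1.9545).abs() < 1e-3);
    println!("gelu(2.0) = {}", gelu(2.0));
}
```

This is only illustrative of the math; in burn-import the op maps onto Burn's existing tensor erf.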

@antimora antimora added help wanted Extra attention is needed good first issue Good for newcomers labels Aug 7, 2023
@ShivangRawat30

Hey, I am currently learning Rust. Since this is a good first issue, could you please assign it to me?

@antimora
Collaborator Author

antimora commented Aug 7, 2023

@ShivangRawat30, I would recommend you start with a unary operator, Sqrt. Work on this and submit a PR. If you agree, just comment "claim Sqrt".

If you have questions and want a faster response, you can join Discord (you can find the link in the README).

Anyone else who wants to work on any of the ops can just comment "claim (some operator)".

@ShivangRawat30

claim Sqrt

@AuruTus
Contributor

AuruTus commented Aug 19, 2023

👀 Hi, I'm interested in this too. But I'm a bit confused: should the new import file be put under burn-import/src/burn/node/, and is the Burn implementation in the burn-tensor crate what we should refer to? CMIIW 😃

@antimora
Collaborator Author

@AuruTus, since all these ops are already available in Burn, they only have to be wired up in burn-import. I would recommend starting with something simple. Maybe Tanh?

@AuruTus
Contributor

AuruTus commented Aug 19, 2023

Thank you for the advice! Will look at it later.

claim Tanh

@AuruTus AuruTus mentioned this issue Aug 24, 2023
1 task
@antimora antimora added the onnx label Aug 29, 2023
@AuruTus
Contributor

AuruTus commented Sep 23, 2023

I have free time now and can finish more. Claim erf.

@antimora
Collaborator Author

@AuruTus we added ONNX-file-based testing to verify end-to-end conversion. Let me know if you have questions.

@AuruTus AuruTus mentioned this issue Oct 11, 2023
1 task
@jmintb

jmintb commented Oct 18, 2023

Claim Pow

@CohenAriel
Contributor

Claim Gather

@CohenAriel
Contributor

@antimora I'm having some problems with the codegen test. Since in torch's gather the index argument is an Int tensor, I get this error:

thread 'burn::node::binary::tests::test_binary_codegen_gather' panicked at burn-import/src/burn/node/base.rs:240:9:
assertion failed: `(left == right)`

Diff < left / right > :
<use burn::tensor::Int;
 use burn::{
     module::Module,
     tensor::{backend::Backend, Tensor},
 };
 #[derive(Module, Debug)]
 pub struct Model<B: Backend> {
     phantom: core::marker::PhantomData<B>,
 }
 impl<B: Backend> Model<B> {
     #[allow(unused_variables)]
     pub fn new_with(record: ModelRecord<B>) -> Self {
         Self {
             phantom: core::marker::PhantomData,
         }
     }
     #[allow(clippy::let_and_return)]
     pub fn forward(&self, tensor1: Tensor<B, 2>, tensor2: Tensor<B, 2, Int>) -> Tensor<B, 2> {
         let tensor3 = tensor1.gather(1, tensor2);
         tensor3
     }
 }
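For context on the op under discussion: torch-style gather along dimension 1 computes out[i][j] = input[i][index[i][j]], which is why the index input must be an Int tensor while the data stays float. A minimal plain-Rust sketch of the 2-D, dim-1 case (illustrative only; not the Burn API, and `gather_dim1` is a hypothetical name):

```rust
// Hypothetical illustration of gather along dim 1 for a 2-D tensor,
// mirroring torch.gather / ONNX Gather semantics on that axis:
// out[i][j] = input[i][index[i][j]]
fn gather_dim1(input: &[Vec<f32>], index: &[Vec<usize>]) -> Vec<Vec<f32>> {
    index
        .iter()
        .enumerate()
        .map(|(i, row)| row.iter().map(|&j| input[i][j]).collect())
        .collect()
}

fn main() {
    let input = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let index = vec![vec![0, 0], vec![1, 0]];
    // Row 0 picks column 0 twice; row 1 picks columns 1 then 0.
    let out = gather_dim1(&input, &index);
    assert_eq!(out, vec![vec![1.0, 1.0], vec![4.0, 3.0]]);
    println!("{:?}", out);
}
```

Note how the output shape follows the index tensor's shape, not the input's, which the generated `forward` signature above reflects with its separate `Tensor<B, 2, Int>` argument.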

@antimora
Collaborator Author

antimora commented Nov 5, 2023

@CohenAriel I am not sure what it is being compared to. Is it expected code or generated code?

@CohenAriel
Contributor

This is the expected code; the top line is from the generated code. (I just realized there are no colors in the paste.)

The generated code imports Int but the expected code doesn't.

@antimora
Collaborator Author

antimora commented Nov 5, 2023

In that case you will need to modify your expected test. You may have to use custom code instead of using the macros in place. You should find some examples with Int imports. I can't point you to them because I am currently on my phone and it is hard to browse code.

@edmondop
Contributor

edmondop commented Dec 4, 2023

I'll grab slice

Projects
None yet
Development

No branches or pull requests

6 participants