[TOSA] Add torch.prim.NumToTensor.Scalar float support #1802
Conversation
@eric-k256 @ramiro050 @silvasean Is there any possibility to add f64 in TOSA? I hit this with the GPT2/distilGPT2 TorchScript models exported with transformers 4.25.1. I have no idea how to work around it other than downgrading transformers to 4.21.2.
Force-pushed beb972a to c623f41
I would've expected the e2e test for NumToTensor to pass on TOSA now.
For TOSA, we're looking into adding something like a 'compiler profile' that would allow a wider set of data types, with a corresponding validation pass that would reject networks using f64/i64 on devices that don't support them. If we added this to TOSA and expanded the types in the dialect, would that be enough for you? Also, are the passes that consume TOSA capable of handling f64? I'd still recommend against f64 unless it is requested by the network developer, as it's likely to be slower on almost every device.
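As an illustration, a minimal sketch of what such a validation pass might look like. The pass name, the option, and the diagnostic text are all hypothetical, not part of any proposed patch:

```cpp
// Hypothetical sketch of the profile-validation pass described above:
// reject any tensor result whose element type is f64/i64 unless the
// wider "compiler profile" is enabled. All names here are illustrative.
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/Pass.h"

using namespace mlir;

namespace {
struct ValidateTosaProfilePass
    : PassWrapper<ValidateTosaProfilePass, OperationPass<ModuleOp>> {
  MLIR_DEFINE_EXPLICIT_INTERNAL_INLINE_TYPE_ID(ValidateTosaProfilePass)

  // In a real pass this would come from a pass option.
  bool allowWideTypes = false;

  void runOnOperation() override {
    getOperation().walk([&](Operation *op) {
      for (Type resultType : op->getResultTypes()) {
        auto tensorType = resultType.dyn_cast<TensorType>();
        if (!tensorType)
          continue;
        Type elemType = tensorType.getElementType();
        if (!allowWideTypes &&
            (elemType.isF64() || elemType.isInteger(64))) {
          op->emitOpError("uses f64/i64, which the target profile rejects");
          signalPassFailure();
        }
      }
    });
  }
};
} // namespace
```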
Force-pushed c623f41 to ab4fdc6
What I need is, at a minimum, a way to work around this issue with casts. For i64, I used tosa::cast to convert i64 to i32, then at the end of the rewrite pattern cast i32 back to i64, and that works. But for f64, the cast fails, as you can see in the following:
Bug Output: (elided; see the debug-output gist linked below)
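For context, the i64 workaround described above looks roughly like this inside a torch-to-TOSA rewrite. This is a minimal sketch; the helper name is hypothetical, and only tosa::CastOp is real API:

```cpp
// Hypothetical sketch of the i64 -> i32 -> i64 cast round-trip.
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Narrow an i64 tensor to i32 so the intermediate TOSA ops accept it,
// then cast back to i64 at the end of the rewrite pattern.
static Value roundTripThroughI32(OpBuilder &builder, Location loc,
                                 Value i64Tensor) {
  auto inType = i64Tensor.getType().cast<RankedTensorType>();
  auto i32Type =
      RankedTensorType::get(inType.getShape(), builder.getI32Type());
  Value narrowed = builder.create<tosa::CastOp>(loc, i32Type, i64Tensor);
  // ... build the body of the lowering on `narrowed` ...
  return builder.create<tosa::CastOp>(loc, inType, narrowed);
}
```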
I guess that test passes because it generates f32 by default, so there is no f64 issue.
Let's at least enable f32 for this patch.
@eric-k256 Is there any possibility to add at least f64 in TOSA?
For the new Hugging Face model facebook/opt-125m that I am working on, I definitely have no way to work around the need for f64. I see that the TOSA dialect definition already includes I64. Is there any reason we cannot add F64 in the same place?
I would prefer to start with the smallest possible scope for f64. If we only want to support it for cast, we could create a new type that is used for the inputs and outputs of tosa.cast instead of Tosa_Tensor. It could be something like Tosa_CastableTensor, which would include all types from Tosa_Tensor plus f64. That would allow us to manage the use of f64, avoiding it in most cases.
Force-pushed ab4fdc6 to dde6e4c
The f64 support for TOSA: https://reviews.llvm.org/D142599
Done. Please review it in LLVM.
Force-pushed 1b425f4 to 07c391c
Force-pushed 1245a18 to b386270
Since https://reviews.llvm.org/rGa2dcd994a7f8cc33640f58105276b78acf3483e5 has been merged and torch-mlir has been updated to include it, this patch should be OK to merge now. @eric-k256 @ramiro050 Please review and approve.
I am trying to solve a similar problem with f64 vtensor.literals in the dynamo flow. The rewrite already has to convert signed integers to unsigned, so it would be pretty simple to also convert f64 to f32.
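A minimal sketch of that demotion, assuming the literal is held as a dense elements attribute. The helper name is hypothetical:

```cpp
// Hypothetical sketch of demoting an f64 dense literal to f32, along the
// lines described above. The helper name is illustrative.
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

static DenseElementsAttr demoteF64LiteralToF32(DenseElementsAttr attr) {
  auto tensorType = attr.getType().cast<RankedTensorType>();
  if (!tensorType.getElementType().isF64())
    return attr; // Nothing to do for non-f64 literals.

  // Truncate each f64 element to f32.
  SmallVector<float> values;
  for (const APFloat &v : attr.getValues<APFloat>())
    values.push_back(static_cast<float>(v.convertToDouble()));

  auto f32Type = RankedTensorType::get(
      tensorType.getShape(), FloatType::getF32(attr.getContext()));
  return DenseElementsAttr::get(f32Type, ArrayRef<float>(values));
}
```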
Just a one-line change to the tosa::ConstOp definition, using Tosa_Tensor_Cast to allow f64, should be enough.
@eric-k256 Please review and approve this: https://reviews.llvm.org/D145336
Force-pushed 1840b15 to 594808e
https://reviews.llvm.org/D145336 has been merged and torch-mlir has been updated. Please review, @ramiro050.
Found this in the OPT model: nod-ai/SHARK-Studio#865
This patch needs the f64 support in TOSA: https://reviews.llvm.org/D142599
Test file and debug output for f64 support: https://gist.github.com/AmosLewis/fca6b0d16ee325fcf7ee400459f4fd40
Related issues: #1615