
Consider adding preferred_element_type to DotGeneralOp/ConvolutionOp #600

Open

burmako opened this issue Nov 24, 2022 · 0 comments

burmako (Contributor) commented Nov 24, 2022

One of the requirements for DotGeneralOp/ConvolutionOp is to support different element types for operands and results. E.g. for i8 inputs we may very well want to support i32 outputs.

At the moment, this is implemented in the StableHLO dialect via load-bearing result types, which makes type inference for these ops infeasible. That works, but an alternative design would specify the result element type via an attribute, which doesn't look like a big imposition on producers and would unlock type inference.
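For concreteness, a minimal sketch in StableHLO's generic MLIR syntax. The first op is how this works today; the `preferred_element_type` attribute on the second op is hypothetical, sketching the design this issue proposes rather than existing syntax:

```mlir
// Today: the i32 result element type is load-bearing - it cannot be
// derived from the i8 operand types, so type inference is infeasible.
%r0 = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]
  >
} : (tensor<2x3xi8>, tensor<3x4xi8>) -> tensor<2x4xi32>

// Hypothetical alternative: carry the result element type in an
// attribute, so the full result type is computable from the op itself.
%r1 = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]
  >,
  preferred_element_type = i32
} : (tensor<2x3xi8>, tensor<3x4xi8>) -> tensor<2x4xi32>
```

With the attribute in place, a shape function could derive the result type from the operand types plus the op's own attributes, which is exactly what type inference needs.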

burmako added the Spec label Nov 24, 2022
This was referenced Dec 10, 2022
sdasgup3 added a commit that referenced this issue Dec 13, 2022
fixes #359

The PR addresses the following:
1. Spec of ConvolutionOp.
2. Clarify the semantics of `precision_config`: the `precision_config` parameter is an array of enums without any constraint on its size, which needs to be resolved.
   - Update: Added constraints on the parameter; a sketch of the resulting usage follows this message. With that, the verifier is in sync with the spec. Also opened #445 for further exploration.
3. Fix #360 (comment).
4. Avoid disabling clang formatting in StablehloOps.cpp.
5. Address #399.

The only missing piece: the constraint between output feature size and input batch size. Working on getting a better understanding of this: done.

Type inference should be revisited as well because of #600.
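For reference, a minimal sketch of `precision_config` on `stablehlo.dot_general` in generic MLIR syntax; the shapes are made up, and the two-element array reflects the one-enum-per-operand constraint described above (see #445 for the ongoing discussion):

```mlir
// Illustrative only: one precision enum per operand, matching the
// size constraint the verifier now enforces.
%result = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]
  >,
  precision_config = [#stablehlo.precision<DEFAULT>, #stablehlo.precision<DEFAULT>]
} : (tensor<2x3xf32>, tensor<3x4xf32>) -> tensor<2x4xf32>
```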
burmako pushed a commit that referenced this issue Dec 21, 2022
In their current formulation, these ops cannot have shape functions
because their lhs/rhs element types can be different from their result
element type.

Perhaps in the future we will add the `preferred_element_type` attribute
to these ops (see #600), and then shape functions will be feasible to
write again, but today we have to disable them.
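To illustrate the point (a hypothetical snippet under the current semantics, not part of the commit): identical operands admit multiple legal result element types, so no shape function can pick between them:

```mlir
// Identical i8 operands, two different legal result element types: the
// result type is load-bearing and cannot be inferred from the operands.
%a = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]
  >
} : (tensor<2x3xi8>, tensor<3x4xi8>) -> tensor<2x4xi8>
%b = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]
  >
} : (tensor<2x3xi8>, tensor<3x4xi8>) -> tensor<2x4xi32>
```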