Create mixed precision integer operations #75
Hey Steve. I had a few contextual questions to begin.
I would add an "ap" (for 'Arbitrary Precision') dialect to circt. This should use regular standard MLIR datatypes (e.g. 'i14' or 'i32'), but the input and output types need to be specified. In contrast, the standard operations assume that the inputs and outputs all have the same type. In a nice custom operation format, this would end up something like: %1 = ap.add(%a : i14, %b : i12) : i15
MLIR (and LLVM) already support arbitrary precision signed and unsigned types (see above). The trick is to add arbitrary precision operations. See https://mlir.llvm.org/docs/LangRef/#integer-type and https://mlir.llvm.org/doxygen/classmlir_1_1IntegerType.html
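To make the contrast concrete, here is a minimal sketch of how a mixed-width ap.add could sit next to a standard-dialect add, which requires matching operand and result types. The ap.add line follows the custom format suggested above; the explicit-extension version assumes the standard-dialect sexti/addi spellings of that era, so treat the exact op names as illustrative.

```mlir
// Standard dialect: all operands and the result share one type, so
// mixed-width inputs need explicit extensions first (op names assumed).
%a15 = sexti %a : i14 to i15
%b15 = sexti %b : i12 to i15
%sum = addi %a15, %b15 : i15

// Hypothetical "ap" dialect: mixed-width operands and a wider result,
// with the extension semantics folded into the operation itself.
%1 = ap.add(%a : i14, %b : i12) : i15
```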
OK, so after doing some poking around, we want to do something similar to that, with the two changes mentioned above.
Can you explain what you mean by "the types to be specified"? I understand what you mean at the English level, but I'm having some trouble understanding how this is done within LLVM/MLIR. Maybe an example of this done elsewhere would be a better fit.
Sure. Looking here: https://github.com/llvm/llvm-project/blob/master/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
It's not clear to me that we want the basic operations like + to take mixed width operands. This makes analysis and transformation more difficult, e.g. see the challenges working with the FIRRTL dialect (it follows the FIRRTL design closely, which has this property). A bunch of transformations get disabled when types don't match as expected, and we've already had bugs where the check was missed. Here is one random example: https://github.com/llvm/circt/blob/master/lib/Dialect/FIRRTL/OpFolds.cpp#L50. What is the downside to making the extensions explicit in the IR? I understand that you wouldn't want that in a human exposed language, but the constraints on the IR are a bit different.
Our experience building HLS with the LLVM IR is that the extensions everywhere are annoying and make code more difficult to transform. It may be a case where the complexity has to be somewhere, but I think that having mixed width operands overall simplifies the bulk of algorithmic transformations, so that extensions don't have to be included in match patterns. Below a certain point, where optimizations may focus on more bitwise operations, making the extensions explicit might be more appropriate. Regardless, it certainly makes sense to have dialects which are close to the frontend semantics, e.g.: https://github.com/Xilinx/HLS_arbitrary_Precision_Types/blob/master/include/ap_int_base.h
Steve
Sure, I agree that frontend-specific dialects make sense. That's the same reason the FIRRTL dialect follows the definition of FIRRTL pretty closely.
Variadic operations are a good example of when it gets nastier: if you have a 4-input add, each input needs at least 2 bits of sign extension to avoid truncation in the add. If inputs to the add get optimized away, then a nonlocal bitwidth analysis is needed to determine that the extensions can be reduced in width.
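To spell out where the 2 bits come from (a sketch, again assuming the sexti/addi spellings): summing four values grows the result by at most ceil(log2(4)) = 2 bits, so extending each operand by 2 bits lets a tree of same-width adds represent every possible sum exactly.

```mlir
// Four i8 inputs; the exact sum always fits in 8 + 2 = 10 bits.
%a10 = sexti %a : i8 to i10
%b10 = sexti %b : i8 to i10
%c10 = sexti %c : i8 to i10
%d10 = sexti %d : i8 to i10
%s0  = addi %a10, %b10 : i10
%s1  = addi %c10, %d10 : i10
%sum = addi %s0, %s1 : i10
// If %c and %d are later optimized away, deciding that the remaining
// extensions can shrink to i9 takes the nonlocal bitwidth analysis
// described above.
```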
Was a consensus ever reached on this? I would like to take a crack at it if so. Otherwise, I'm interested in some small (non-blocking) sub-project to both contribute and continue learning LLVM and MLIR :)
I would still like to resist this as long as possible. Please don't do this proactively.
I'm happy to work with you on it, especially if it helps Chris convince me that it's a bad idea. :)
Sounds good. I'm still interested; I just have university and research priorities to commit to first before taking this on, so this may need to come after the semester is over.
Closing this as "behaves correctly" for the comb dialect.
HLS often uses arbitrary precision arithmetic. In this model, most operations (e.g. add, multiply) have wider results than their inputs, in order to represent all possible result values without overflow. Explicit truncation operations are necessary in order to limit the growth of values. The rules are roughly:
bitwidth(a + b) = max(bitwidth(a), bitwidth(b)) + 1
bitwidth(a - b) = max(bitwidth(a), bitwidth(b)) + 1
bitwidth(a * b) = bitwidth(a) + bitwidth(b)
bitwidth(a / b) = bitwidth(a)
Most other operations are relatively straightforward. Note that division is often not exact, because only integer results are possible. A bitwidth-inference pass which propagates bitwidths from inputs through such operations, converting standard operations into arbitrary precision operations with explicit truncation, would be interesting as well.
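For illustration, a sketch of how these rules might look in IR, using the hypothetical ap.* custom format proposed earlier in the thread (the op names and syntax are assumptions, not an existing dialect):

```mlir
// Result widths follow the rules above.
%sum  = ap.add(%a : i12, %b : i14) : i15   // max(12, 14) + 1
%diff = ap.sub(%a : i12, %b : i14) : i15   // max(12, 14) + 1
%prod = ap.mul(%a : i12, %b : i14) : i26   // 12 + 14
%quot = ap.div(%a : i12, %b : i14) : i12   // bitwidth(a)
// Explicit truncation (spelled ap.trunc here) limits the growth of values:
%narrow = ap.trunc(%prod : i26) : i16
```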