[rtl] Add flattening for xor, or, add, mul. #121
Conversation
I'd recommend pulling this out into a template like you propose. This gives you the excuse to give the logic a function name, and makes it more appetizing to reuse as new rtl ops are added.
Thanks for driving this!
Ok, I've written something along those lines. We could make it slightly simpler by calling
lib/Dialect/RTL/Ops.cpp (outdated)

/// the original inputs. This is used when flattening in the canonicalization
/// pass. Example: op(1, 2, op(3, 4), 5) -> op(1, 2, 3, 4, 5)
template <typename Inputs>
auto flattenNthInput(const Inputs &inputs, const Inputs &opInputs, size_t N) {
I thought you'd pull the whole "for loop" out, and template the function on the Operation type. I don't think this needs to be templated on Inputs, since that type should be the same for all of them.
Please mark this function static.
nit: if you keep it this way, please rename 'N' to something like 'splitNo'. If you keep 'n', please lowercase it.
I thought about pulling out the entire loop, but then it would require (a) pulling in the rewriter and (b) determining whether it was a success() or a failure(). We could simply set the last pass to be flattening, i.e.
LogicalResult matchAndRewrite(AndOp op,
PatternRewriter &rewriter) const override {
...
    return flattenNthInput(inputs, rewriter); // returns success() or failure() depending on whether there is an op to flatten.
}
However, I didn't want to solidify it as the last pass. I didn't know if there'd be any future ordering issues, where maybe one pass should come before another for some reason.
Fixed static and changed the name of N. I also wasn't sure whether an mlir::OperandRange was a view of a container or the container itself. Looking at other uses, it seems it's passed by value, so I mimicked this.
For this and everything above, if you disagree I will be happy to change it.
You don't think something like:
if (tryFlatteningOperands(op, rewriter)) return success();
would work? I would think that tryFlatteningOperands would be templated on the type of op, e.g. AndOp in this case.
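As a purely illustrative, dependency-free sketch (the real helper operates on MLIR operations and a PatternRewriter, and none of the types below are from the actual patch), a tryFlatteningOperands templated on the op type could look like this, mirroring the op(1, 2, op(3, 4), 5) -> op(1, 2, 3, 4, 5) example from the doc comment:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-in for a variadic op: a leaf holds a value, a
// non-leaf holds nested operands of the same op kind.
struct Op {
  int value = 0;                              // leaf payload
  std::vector<std::shared_ptr<Op>> operands;  // nested ops, if any
  bool isLeaf() const { return operands.empty(); }
};

using OpPtr = std::shared_ptr<Op>;

static OpPtr leaf(int v) {
  auto op = std::make_shared<Op>();
  op->value = v;
  return op;
}

static OpPtr node(std::vector<OpPtr> ops) {
  auto op = std::make_shared<Op>();
  op->operands = std::move(ops);
  return op;
}

// Sketch of tryFlatteningOperands, templated on the op type: if any
// operand is itself an op of the same kind, splice its operands in
// place and report that a rewrite happened (true ~ success()).
// One call flattens one level; a driver would reapply it to a fixpoint,
// like the canonicalization framework reapplies patterns.
template <typename OpT>
static bool tryFlatteningOperands(OpT &op) {
  std::vector<std::shared_ptr<OpT>> flattened;
  bool changed = false;
  for (auto &operand : op.operands) {
    if (!operand->isLeaf()) {
      // Inline the nested op's operands in place of the nested op.
      flattened.insert(flattened.end(), operand->operands.begin(),
                       operand->operands.end());
      changed = true;
    } else {
      flattened.push_back(operand);
    }
  }
  op.operands = std::move(flattened);
  return changed;  // false ~ failure(): nothing to flatten
}
```

With this shape, each op's matchAndRewrite reduces to a one-line call, which is the reuse benefit being discussed.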
That should work, let me brew something up and I'll request your review.
One thing to note is the amount of duplication in each case. To reduce this, I could do something like:
I'm not sure if that's worth the extra level of abstraction, so I'm looking for input.