Use custom larq MLIR dialect for our ops #384
Conversation
@@ -55,7 +87,7 @@ TODO
   }];

   let arguments = (ins
-    TensorOf<[F32]>:$input,
+    TensorOf<[F32, I32, QI8]>:$input,
The correctness of this definition was actually never checked before.
pass_manager->addPass(mlir::TFL::CreateOpRemovalPass());
pass_manager->addPass(
    mlir::TFL::CreatePostQuantizePass(emit_quant_adaptor_ops));
These passes were only needed to remove dead ops.
patterns.insert<LegalizeToCustomOp<TF::BsignOp>,
                LegalizeToCustomOp<TF::Bconv2dOp>,
                LegalizeToCustomOp<TF::BMaxPool2dOp>>(ctx);
Not sure if there is a nice way to match all ops in a dialect.
Looks good to me 👍
Note that this is not a breaking change since it doesn't change the custom op name in the flatbuffer.
This fixes build problems after merging #363
* Update snapshot tests
* Run black
TensorFlow added a `tfl.custom` op to the `TFL` dialect in tensorflow/tensorflow@fb7ea8f. This allows us to decouple our ops from the `TF` dialect, which was previously necessary to allow for flatbuffer serialization.

Essentially this PR does two things: it moves our op definitions to a custom larq dialect and introduces a legalize pass at the end that translates our ops to `tfl.custom` ops, which can be correctly serialized to the TFLite flatbuffer.

This has a few advantages over the current approach of adding our ops to the `TF` dialect.

I marked this as a draft PR since it is built on top of #373 and #382.
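The custom ops that come out of the legalize pass operate on bitpacked binary tensors. As a rough illustration of what "bitpacked" means here, the following is a hypothetical helper, not the actual LCE kernel; the real bit order and packing layout in the compute engine may differ:

```python
import numpy as np

def bitpack_signs(x, word_bits=32):
    """Pack the signs of a float vector into 32-bit words.

    Bit i of a word is set when the corresponding element is negative.
    (Illustrative sketch only; the real bit layout in LCE may differ.)
    """
    bits = (np.asarray(x) < 0).astype(np.uint32)
    # Pad with zeros up to a multiple of the word size.
    pad = (-len(bits)) % word_bits
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint32)])
    words = bits.reshape(-1, word_bits)
    shifts = np.arange(word_bits, dtype=np.uint32)
    return (words << shifts).sum(axis=1, dtype=np.uint32)

packed = bitpack_signs([-1.0, 2.0, -3.0, 4.0])
# packed[0] == 5  (sign bits 1,0,1,0 -> 0b0101)
```

Packing 32 signs per word is what lets binary kernels replace float multiply-accumulate with bitwise instructions.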
Click to expand autogenerated docs
'lq' Dialect
Types and operations for Larq dialect
This dialect contains operations for Larq. This dialect will be used in
conjunction with the TensorFlow dialects for converting & optimizing
TF graphs to be deployed on Larq Compute Engine.
[TOC]
Operation definition
lq.BMaxPool2d (TF::BMaxPool2dOp)
Binary MaxPool2D op.
Computes a MaxPool2D operation and outputs bitpacked binary values, for consumption by a binary convolution.
Attributes:
padding
stride_width
stride_height
filter_width
filter_height
Operands:
input
Results:
output
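Why max pooling composes cheaply with bitpacked binary values can be sketched as follows. Assuming bit = 1 encodes −1 (an assumption of this sketch, not stated by the docs), the max of a pooling window is −1 only when every element in it is −1, so pooling reduces to a bitwise AND across the window:

```python
import numpy as np

def binary_maxpool_window(packed_window):
    """Max-pool one window of bitpacked binary values.

    Assumes bit = 1 encodes -1 (sketch-level assumption).  The max over
    the window is -1 only if every element is -1, i.e. the result bit is
    the AND of the corresponding bits of all words in the window.
    """
    result = np.uint32(0xFFFFFFFF)
    for word in packed_window:
        result &= np.uint32(word)
    return result

binary_maxpool_window([0b1100, 0b1010])  # bit set only where both are set
```

Under the opposite encoding (bit = 1 for +1) the same reduction would be a bitwise OR; either way no unpacking is needed.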
lq.Bconv2d (TF::Bconv2dOp)
Computes a 2-D binary convolution by binarizing and bitpacking the input and filter.
TODO
Attributes:
channels_in
dilation_height_factor
dilation_width_factor
fused_activation_function
pad_values
padding
stride_height
stride_width
Operands:
input
filter
post_activation_multiplier
post_activation_bias
Results:
output
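Binary convolutions like this one are typically built on the XOR-popcount trick: for two ±1 vectors stored as sign bitmasks, the dot product follows directly from counting the positions where the signs differ. A minimal sketch (illustrative, not the LCE implementation):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element ±1 vectors given as sign bitmasks.

    Bit i set means element i is -1.  The elements differ exactly where
    a_bits ^ b_bits has a set bit, and each differing pair contributes
    -1 instead of +1, hence dot = n - 2 * popcount(a ^ b).
    """
    return n - 2 * bin(a_bits ^ b_bits).count("1")

# Vectors (+1, -1, +1, -1) and (+1, -1, -1, +1), reading bit 0 first:
# elementwise products are +1, +1, -1, -1, so the dot product is 0.
binary_dot(0b1010, 0b0110, 4)  # -> 0
```

A full Bconv2d would apply this per receptive field and then fold in `post_activation_multiplier` and `post_activation_bias`.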
lq.Bsign (TF::BsignOp)
Returns an element-wise indication of the binary sign of a number.
y = sign(x) = -1 if x < 0; 1 if x >= 0.
Operands:
x
Results:
y
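The Bsign definition above can be expressed as a one-line NumPy sketch; note that, per the definition, 0 maps to +1, unlike `np.sign`, which maps 0 to 0:

```python
import numpy as np

def bsign(x):
    """Element-wise binary sign: -1 where x < 0, else +1.

    Matches y = sign(x) = -1 if x < 0; 1 if x >= 0 from the op docs
    (so bsign(0) == 1, unlike np.sign(0) == 0).
    """
    return np.where(np.asarray(x) < 0, -1, 1)

list(bsign([-2.0, 0.0, 3.5]))  # -> [-1, 1, 1]
```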