Use custom larq MLIR dialect for our ops #384

Merged 13 commits into master on Jun 9, 2020

Conversation

lgeiger (Member) commented on May 28, 2020

TensorFlow added a tfl.custom op to the TFL dialect in tensorflow/tensorflow@fb7ea8f. This allows us to decouple our ops from the TF dialect; previously they had to live there to allow for flatbuffer serialization.

Essentially this PR does two things: it moves our op definitions to a custom larq dialect, and it introduces a legalize pass at the end that translates our ops to tfl.custom ops, which can be correctly serialized to the TFLite flatbuffer.
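
For illustration, here is a minimal sketch of what such a legalization pattern could look like. Everything here is schematic: the real `LegalizeToCustomOp` in `legalize_tflite.cc` may be structured differently, the `SerializeAttrsToBytes` helper is hypothetical, and the exact `TFL::CustomOp` builder signature varies across TensorFlow versions.

```cpp
#include "mlir/IR/PatternMatch.h"
#include "tensorflow/compiler/mlir/lite/ir/tfl_ops.h"

// Schematic pattern: rewrite a Larq op into a tfl.custom op that carries the
// op name as custom_code and the serialized attributes as custom_option.
template <typename LarqOpT>
struct LegalizeToCustomOp : public mlir::OpRewritePattern<LarqOpT> {
  using mlir::OpRewritePattern<LarqOpT>::OpRewritePattern;

  mlir::LogicalResult matchAndRewrite(
      LarqOpT op, mlir::PatternRewriter& rewriter) const override {
    // Hypothetical helper that packs the op's attributes into a byte blob
    // (e.g. a flexbuffer) that the TFLite custom op parser understands.
    std::string options = SerializeAttrsToBytes(op);
    // Note: the attribute type expected for custom_option differs across
    // TensorFlow versions; a plain string attribute stands in for it here.
    rewriter.replaceOpWithNewOp<mlir::TFL::CustomOp>(
        op, op.getOperation()->getResultTypes(),
        op.getOperation()->getOperands(),
        /*custom_code=*/op.getOperation()->getName().getStringRef(),
        /*custom_option=*/rewriter.getStringAttr(options));
    return mlir::success();
  }
};
```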

This has a few advantages over the current approach of adding our ops to the TF dialect:

  • Full compatibility with MLIR passes and patterns
    • Proper type checking in the IR and during transformation
      • should prevent bugs that could lead to invalid flatbuffers
      • will allow us to introduce strict verification of the correctness of the IR
    • Proper handling of MLIR traits
      • MLIR can properly check whether ops have side effects and remove dead ones without the need for custom op cleanup patterns
      • will allow us to use quantization traits in the future, if necessary, for easier customization of bias quantization
  • Full flexibility of attribute serialization and deserialization; if really necessary we could even drop flexbuffers and roll our own attribute serialization (see the sketch after this list)
  • Autogenerated docs for our IR
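
As a sketch of that serialization: TFLite custom op options are typically encoded as a flexbuffer map, along these lines. The attribute names mirror the autogenerated Bconv2d docs below; the exact layout the converter emits is an implementation detail, and the values here are placeholders only.

```cpp
#include <cstdint>
#include <vector>

#include "flatbuffers/flexbuffers.h"

// Encode custom op attributes into the byte blob carried by a tfl.custom op.
// Attribute names mirror the autogenerated lq.Bconv2d docs; the values are
// placeholders for illustration.
std::vector<uint8_t> SerializeBconv2dOptions() {
  flexbuffers::Builder fbb;
  fbb.Map([&]() {
    fbb.Int("channels_in", 64);
    fbb.Int("stride_height", 1);
    fbb.Int("stride_width", 1);
    fbb.String("padding", "SAME");
    fbb.String("fused_activation_function", "NONE");
  });
  fbb.Finish();
  return fbb.GetBuffer();
}
```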

I marked this as a draft PR since it is built on top of #373 and #382.

Autogenerated docs:

'lq' Dialect

Types and operations for Larq dialect

This dialect contains operations for Larq. This dialect will be used in
conjunction with the TensorFlow dialects for converting & optimizing
TF graphs to be deployed on Larq Compute Engine.

[TOC]

Operation definition

lq.BMaxPool2d (TF::BMaxPool2dOp)

Binary MaxPool2D op.

Computes a MaxPool2D operation and outputs bitpacked binary values, for consumption by a binary convolution.

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| padding | StringAttr | padding enum |
| stride_width | IntegerAttr | 32-bit signless integer attribute |
| stride_height | IntegerAttr | 32-bit signless integer attribute |
| filter_width | IntegerAttr | 32-bit signless integer attribute |
| filter_height | IntegerAttr | 32-bit signless integer attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of 32-bit float or 32-bit signless integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of 32-bit signless integer values |

lq.Bconv2d (TF::Bconv2dOp)

Computes a 2-D binary convolution by binarizing and bitpacking the input and filter.

TODO

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| channels_in | IntegerAttr | 32-bit signless integer attribute |
| dilation_height_factor | IntegerAttr | 32-bit signless integer attribute |
| dilation_width_factor | IntegerAttr | 32-bit signless integer attribute |
| fused_activation_function | StringAttr | fused activation enum |
| pad_values | IntegerAttr | 32-bit signless integer attribute |
| padding | StringAttr | padding enum |
| stride_height | IntegerAttr | 32-bit signless integer attribute |
| stride_width | IntegerAttr | 32-bit signless integer attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of 32-bit float or 32-bit signless integer or QI8 type values |
| filter | tensor of 32-bit float or 32-bit signless integer values |
| post_activation_multiplier | tensor of 32-bit float values |
| post_activation_bias | tensor of 32-bit float values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of 32-bit float or 32-bit signless integer or QI8 type values |

lq.Bsign (TF::BsignOp)

Returns an element-wise indication of the binary sign of a number.

y = sign(x) = -1 if x < 0; 1 if x >= 0.

Operands:

| Operand | Description |
| --- | --- |
| x | tensor of bfloat16 type or 16-bit float or 32-bit float or 64-bit float or 32-bit signless integer or 64-bit signless integer or complex type with 64-bit float elements or complex type with 32-bit float elements values |

Results:

| Result | Description |
| --- | --- |
| y | tensor of bfloat16 type or 16-bit float or 32-bit float or 64-bit float or 32-bit signless integer or 64-bit signless integer or complex type with 64-bit float elements or complex type with 32-bit float elements values |
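
For reference, a scalar C++ equivalent of the sign convention above; note that zero maps to +1, unlike a conventional three-valued sign:

```cpp
// Scalar reference for lq.Bsign: y = -1 if x < 0, else +1 (so sign(0) = +1).
float bsign(float x) { return x < 0.0f ? -1.0f : 1.0f; }
```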

lgeiger added the internal-improvement (Internal Improvements and Maintenance) label on May 28, 2020
@@ -55,7 +87,7 @@ TODO
   }];

   let arguments = (ins
-    TensorOf<[F32]>:$input,
+    TensorOf<[F32, I32, QI8]>:$input,
lgeiger (Member, Author) commented:

The correctness of this definition was actually never checked before.

Comment on lines -28 to -42:
-  pass_manager->addPass(mlir::TFL::CreateOpRemovalPass());
-  pass_manager->addPass(
-      mlir::TFL::CreatePostQuantizePass(emit_quant_adaptor_ops));
lgeiger (Member, Author) commented:

These passes were only needed to remove dead ops.

larq_compute_engine/mlir/tf_tfl_passes.cc (review thread outdated; resolved)
larq_compute_engine/mlir/transforms/legalize_tflite.cc (review thread outdated; resolved)
Comment on lines +41 to +43:
+  patterns.insert<LegalizeToCustomOp<TF::BsignOp>,
+                  LegalizeToCustomOp<TF::Bconv2dOp>,
+                  LegalizeToCustomOp<TF::BMaxPool2dOp>>(ctx);
lgeiger (Member, Author) commented:

Not sure if there is a nice way to match all ops in a dialect.

lgeiger force-pushed the remove-filter-format-attr branch from c7e3b09 to 634b765 on May 29, 2020 09:32
Base automatically changed from remove-filter-format-attr to master May 29, 2020 10:35
lgeiger marked this pull request as ready for review on May 29, 2020 12:12
lgeiger force-pushed the lce-dialect-prepare branch 3 times, most recently from 911f134 to e23bdd9 on June 5, 2020 11:33
AdamHillier (Contributor) left a comment:

Looks good to me 👍

This fixes build problems after merging #363
Tombana merged commit 91a8a0f into master on Jun 9, 2020
Tombana deleted the lce-dialect-prepare branch on June 9, 2020 19:21
Tombana pushed a commit that referenced this pull request on Apr 6, 2021:
* Update snapshot tests

* Run black