Update code block designations
'```mlir' indicates that the code block contains MLIR code and should use MLIR
syntax highlighting, while '{.mlir}' was a Markdown extension that used a style
file to color the background of the code block differently. That
background-color extension was a custom one that we can retire now that we have
syntax highlighting.

Also change '```td' to '```tablegen' to match the chroma syntax-highlighting
designation.

PiperOrigin-RevId: 286222976
jpienaar authored and tensorflower-gardener committed Dec 18, 2019
1 parent 2666b97 commit d7e2cc9
Showing 14 changed files with 38 additions and 49 deletions.
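The rewrite in this commit is mechanical — each retired fence designation is replaced by a chroma-recognized one on matching lines — so a change like this could be scripted. The sketch below is illustrative only: `fix_fences` and its rewrite table are hypothetical helpers, not tooling from this commit, and the patterns are inferred from the fence variants visible in the diff that follows.

```python
import re

# Hypothetical rewrite table mapping the retired fence designations
# ('{.mlir}', 'MLIR(.mlir)', '.mlir', 'td', '.td', 'td {.td}',
# 'TableGen(.td):') onto the chroma-recognized 'mlir' and 'tablegen'.
# Inferred from the diff below, not taken from any actual tooling.
FENCE_REWRITES = [
    (re.compile(r"^```\s*(?:mlir\s*)?\{\.mlir\}\s*$"), "```mlir"),
    (re.compile(r"^```MLIR\(\.mlir\)\s*$"), "```mlir"),
    (re.compile(r"^```\.mlir\s*$"), "```mlir"),
    (re.compile(r"^```(?:td\s*)?\{\.td\}\s*$"), "```tablegen"),
    (re.compile(r"^```TableGen\(\.td\):?\s*$"), "```tablegen"),
    (re.compile(r"^```\.?td\s*$"), "```tablegen"),
]

def fix_fences(text: str) -> str:
    """Rewrite retired code-fence designations, leaving other lines alone.

    Only handles fences at column 0; comment-prefixed fences in the C++
    headers touched by this commit (e.g. '/// ```{.mlir}') would need the
    comment prefix handled separately.
    """
    out = []
    for line in text.splitlines():
        for pattern, replacement in FENCE_REWRITES:
            if pattern.match(line):
                line = replacement
                break
        out.append(line)
    return "\n".join(out)
```

Run over a docs tree, a helper like this would reproduce most of the per-line substitutions recorded below.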
6 changes: 3 additions & 3 deletions mlir/g3doc/Dialects/SPIR-V.md
@@ -105,7 +105,7 @@ array-type ::= `!spv.array<` integer-literal `x` element-type `>`

For example,

-```{.mlir}
+```mlir
!spv.array<4 x i32>
!spv.array<16 x vector<4 x f32>>
```
@@ -154,7 +154,7 @@ pointer-type ::= `!spv.ptr<` element-type `,` storage-class `>`

For example,

-```{.mlir}
+```mlir
!spv.ptr<i32, Function>
!spv.ptr<vector<4 x f32>, Uniform>
```
@@ -169,7 +169,7 @@ runtime-array-type ::= `!spv.rtarray<` element-type `>`

For example,

-```{.mlir}
+```mlir
!spv.rtarray<i32>
!spv.rtarray<vector<4 x f32>>
```
2 changes: 1 addition & 1 deletion mlir/g3doc/Dialects/Standard.md
@@ -374,7 +374,7 @@ Example:

TODO: This operation is easy to extend to broadcast to dynamically shaped
tensors in the same way dynamically shaped memrefs are handled.
-```mlir {.mlir}
+```mlir
// Broadcasts %s to a 2-d dynamically shaped tensor, with %m, %n binding
// to the sizes of the two dynamic dimensions.
%m = "foo"() : () -> (index)
6 changes: 3 additions & 3 deletions mlir/g3doc/QuickstartRewrites.md
@@ -43,7 +43,7 @@ operations are generated from. To define an operation one needs to specify:
are ignored by the main op and doc generators, but could be used in, say,
the translation from a dialect to another representation.

-```td {.td}
+```tablegen
def TFL_LeakyReluOp: TFL_Op<TFL_Dialect, "leaky_relu",
[NoSideEffect, SameValueType]>,
Results<(outs Tensor)> {
@@ -99,7 +99,7 @@ generated.
Let us continue with LeakyRelu. To map from TensorFlow's `LeakyRelu` to
TensorFlow Lite's `LeakyRelu`:
-```td {.td}
+```tablegen
def : Pat<(TF_LeakyReluOp $arg, F32Attr:$a), (TFL_LeakyReluOp $arg, $a)>
```

@@ -119,7 +119,7 @@ as destination then one could use a general native code fallback method. This
consists of defining a pattern as well as adding a C++ function to perform the
replacement:

-```td {.td}
+```tablegen
def createTFLLeakyRelu : NativeCodeCall<
"createTFLLeakyRelu($_builder, $0->getDefiningOp(), $1, $2)">;
4 changes: 2 additions & 2 deletions mlir/g3doc/Traits.md
@@ -88,7 +88,7 @@ definition of the trait class. This can be done using the `NativeOpTrait` and
`ParamNativeOpTrait` classes. `ParamNativeOpTrait` provides a mechanism in which
to specify arguments to a parametric trait class with an internal `Impl`.

-```td
+```tablegen
// The argument is the c++ trait class name.
def MyTrait : NativeOpTrait<"MyTrait">;
@@ -100,7 +100,7 @@ class MyParametricTrait<int prop>

These can then be used in the `traits` list of an op definition:

-```td
+```tablegen
def OpWithInferTypeInterfaceOp : Op<...[MyTrait, MyParametricTrait<10>]> { ... }
```

20 changes: 10 additions & 10 deletions mlir/g3doc/Tutorials/Toy/Ch-3.md
@@ -36,7 +36,7 @@ def transpose_transpose(x) {

Which corresponds to the following IR:

-```MLIR(.mlir)
+```mlir
func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
%0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
%1 = "toy.transpose"(%0) : (tensor<*xf64>) -> tensor<*xf64>
@@ -131,7 +131,7 @@ similar way to LLVM:
Finally, we can run `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt` and
observe our pattern in action:

-```MLIR(.mlir)
+```mlir
func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
%0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
"toy.return"(%arg0) : (tensor<*xf64>) -> ()
@@ -146,13 +146,13 @@ input. The Canonicalizer knows to clean up dead operations; however, MLIR
conservatively assumes that operations may have side-effects. We can fix this by
adding a new trait, `NoSideEffect`, to our `TransposeOp`:

-```TableGen(.td):
+```tablegen:
def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
```

Let's retry now `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:

-```MLIR(.mlir)
+```mlir
func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
"toy.return"(%arg0) : (tensor<*xf64>) -> ()
}
@@ -169,7 +169,7 @@ Declarative, rule-based pattern-match and rewrite (DRR) is an operation
DAG-based declarative rewriter that provides a table-based syntax for
pattern-match and rewrite rules:

-```TableGen(.td):
+```tablegen:
class Pattern<
dag sourcePattern, list<dag> resultPatterns,
list<dag> additionalConstraints = [],
@@ -179,7 +179,7 @@ class Pattern<
A redundant reshape optimization similar to SimplifyRedundantTranspose can be
expressed more simply using DRR as follows:

-```TableGen(.td):
+```tablegen:
// Reshape(Reshape(x)) = Reshape(x)
def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
(ReshapeOp $arg)>;
@@ -193,7 +193,7 @@ transformation is conditional on some properties of the arguments and results.
An example is a transformation that eliminates reshapes when they are redundant,
i.e. when the input and output shapes are identical.

-```TableGen(.td):
+```tablegen:
def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
def RedundantReshapeOptPattern : Pat<
(ReshapeOp:$res $arg), (replaceWithValue $arg),
@@ -207,7 +207,7 @@ C++. An example of such an optimization is FoldConstantReshape, where we
optimize Reshape of a constant value by reshaping the constant in place and
eliminating the reshape operation.

-```TableGen(.td):
+```tablegen:
def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
def FoldConstantReshapeOptPattern : Pat<
(ReshapeOp:$res (ConstantOp $arg)),
@@ -226,7 +226,7 @@ def main() {
}
```

-```MLIR(.mlir)
+```mlir
module {
func @main() {
%0 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf64>}
@@ -243,7 +243,7 @@ module {
We can try to run `toyc-ch3 test/trivialReshape.toy -emit=mlir -opt` and observe
our pattern in action:

-```MLIR(.mlir)
+```mlir
module {
func @main() {
%0 = "toy.constant"() {value = dense<[[1.000000e+00], [2.000000e+00]]> \
10 changes: 5 additions & 5 deletions mlir/g3doc/Tutorials/Toy/Ch-4.md
@@ -107,7 +107,7 @@ and core to a single operation. The interface that we will be adding here is the
To add this interface we just need to include the definition into our operation
specification file (`Ops.td`):

-```.td
+```tablegen
#ifdef MLIR_CALLINTERFACES
#else
include "mlir/Analysis/CallInterfaces.td"
@@ -116,7 +116,7 @@ include "mlir/Analysis/CallInterfaces.td"

and add it to the traits list of `GenericCallOp`:

-```.td
+```tablegen
def GenericCallOp : Toy_Op<"generic_call",
[DeclareOpInterfaceMethods<CallOpInterface>]> {
...
@@ -176,7 +176,7 @@ the inliner expects an explicit cast operation to be inserted. For this, we need
to add a new operation to the Toy dialect, `ToyCastOp`(toy.cast), to represent
casts between two different shapes.

-```.td
+```tablegen
def CastOp : Toy_Op<"cast", [NoSideEffect, SameOperandsAndResultShape]> {
let summary = "shape cast operation";
let description = [{
@@ -263,7 +263,7 @@ to be given to the generated C++ interface class as a template argument. For our
purposes, we will name the generated class a simpler `ShapeInference`. We also
provide a description for the interface.

-```.td
+```tablegen
def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
let description = [{
Interface to access a registered method to infer the return types for an
@@ -279,7 +279,7 @@ the need. See the
[ODS documentation](../../OpDefinitions.md#operation-interfaces) for more
information.

-```.td
+```tablegen
def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
let description = [{
Interface to access a registered method to infer the return types for an
2 changes: 1 addition & 1 deletion mlir/g3doc/Tutorials/Toy/Ch-5.md
@@ -237,7 +237,7 @@ def PrintOp : Toy_Op<"print"> {

Looking back at our current working example:

-```.mlir
+```mlir
func @main() {
%0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
%2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
4 changes: 2 additions & 2 deletions mlir/g3doc/Tutorials/Toy/Ch-6.md
@@ -113,7 +113,7 @@ that only legal operations will remain after the conversion.

Looking back at our current working example:

-```.mlir
+```mlir
func @main() {
%0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
%2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
@@ -125,7 +125,7 @@ func @main() {

We can now lower down to the LLVM dialect, which produces the following code:

-```.mlir
+```mlir
llvm.func @free(!llvm<"i8*">)
llvm.func @printf(!llvm<"i8*">, ...) -> !llvm.i32
llvm.func @malloc(!llvm.i64) -> !llvm<"i8*">
4 changes: 2 additions & 2 deletions mlir/g3doc/Tutorials/Toy/Ch-7.md
@@ -358,7 +358,7 @@ A few of our existing operations will need to be updated to handle `StructType`.
The first step is to make the ODS framework aware of our Type so that we can use
it in the operation definitions. A simple example is shown below:

-```td
+```tablegen
// Provide a definition for the Toy StructType for use in ODS. This allows for
// using StructType in a similar way to Tensor or MemRef.
def Toy_StructType :
@@ -371,7 +371,7 @@ def Toy_Type : AnyTypeOf<[F64Tensor, Toy_StructType]>;
We can then update our operations, e.g. `ReturnOp`, to also accept the
`Toy_StructType`:

-```td
+```tablegen
def ReturnOp : Toy_Op<"return", [Terminator, HasParent<"FuncOp">]> {
...
let arguments = (ins Variadic<Toy_Type>:$input);
11 changes: 0 additions & 11 deletions mlir/g3doc/includes/style.css

This file was deleted.

2 changes: 1 addition & 1 deletion mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.h
@@ -67,7 +67,7 @@ std::string generateLibraryCallName(Operation *op);
/// `A(i, k) * B(k, j) -> C(i, j)` will have the following, ordered, list of
/// affine maps:
///
-/// ```{.mlir}
+/// ```mlir
/// (
/// (i, j, k) -> (i, k),
/// (i, j, k) -> (k, j),
2 changes: 1 addition & 1 deletion mlir/include/mlir/Dialect/Linalg/IR/LinalgTypes.h
@@ -46,7 +46,7 @@ class LinalgDialect : public Dialect {
/// It is constructed by calling the linalg.range op with three values index of
/// index type:
///
-/// ```{.mlir}
+/// ```mlir
/// func @foo(%arg0 : index, %arg1 : index, %arg2 : index) {
/// %0 = linalg.range %arg0:%arg1:%arg2 : !linalg.range
/// }
12 changes: 6 additions & 6 deletions mlir/include/mlir/IR/AffineMap.h
@@ -180,27 +180,27 @@ AffineMap simplifyAffineMap(AffineMap map);
///
/// Example 1:
///
-/// ```{.mlir}
+/// ```mlir
/// (d0, d1, d2) -> (d1, d1, d0, d2, d1, d2, d1, d0)
/// 0 2 3
/// ```
///
/// returns:
///
-/// ```{.mlir}
+/// ```mlir
/// (d0, d1, d2, d3, d4, d5, d6, d7) -> (d2, d0, d3)
/// ```
///
/// Example 2:
///
-/// ```{.mlir}
+/// ```mlir
/// (d0, d1, d2) -> (d1, d0 + d1, d0, d2, d1, d2, d1, d0)
/// 0 2 3
/// ```
///
/// returns:
///
-/// ```{.mlir}
+/// ```mlir
/// (d0, d1, d2, d3, d4, d5, d6, d7) -> (d2, d0, d3)
/// ```
AffineMap inversePermutation(AffineMap map);
@@ -214,7 +214,7 @@ AffineMap inversePermutation(AffineMap map);
/// Example:
/// When applied to the following list of 3 affine maps,
///
-/// ```{.mlir}
+/// ```mlir
/// {
/// (i, j, k) -> (i, k),
/// (i, j, k) -> (k, j),
Expand All @@ -224,7 +224,7 @@ AffineMap inversePermutation(AffineMap map);
///
/// Returns the map:
///
-/// ```{.mlir}
+/// ```mlir
/// (i, j, k) -> (i, k, k, j, i, j)
/// ```
AffineMap concatAffineMaps(ArrayRef<AffineMap> maps);
2 changes: 1 addition & 1 deletion mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
@@ -512,7 +512,7 @@ static LogicalResult verify(YieldOp op) {

// A LinalgLibraryOp prints as:
//
-// ```{.mlir}
+// ```mlir
// concrete_op_name (ssa-inputs, ssa-outputs) : view-types
// ```
//
