[doc] Minor improvements of the compiler docs #12695

Open: wants to merge 1 commit into `master`
4 changes: 2 additions & 2 deletions compiler/angkor/README.md
@@ -2,11 +2,11 @@

## Purpose

_angkor_ is a `nncc` core library
_angkor_ is an `nncc` core library

## How to use

_angkor_ implements abstract data type(ADT) for feature, kernel, tensor.
_angkor_ implements abstract data type (ADT) for feature, kernel, tensor.
There are layout, shape information and enumerator and so on.

To use some of these things, just insert `include`!
8 changes: 4 additions & 4 deletions compiler/caffegen/README.md
@@ -5,16 +5,16 @@
## How caffegen works

Some of commands in `caffegen` use standard input for reading data and standard output for exporting result.
In this case, we strongly recommand you to use pipe, not copy & paste the content of file itself.
In this case, we strongly recommend using a pipe rather than copying & pasting the file content itself.

Otherwise, `caffegen` uses arguments to pass some directories.

## Supported command

Basically, caffgen command is used as `caffegen [COMMAND]` and there are four `COMMAND` types.
Basically, the caffegen command is used as `caffegen [COMMAND]` and there are four `COMMAND` types.
- init : initialize parameters using prototxt.
- encode : make a binary file(caffemodel) using initialized data
- decode : decode a binary file(caffemodel) and reproduce the initialized data
- encode : make a binary file (caffemodel) using initialized data
- decode : decode a binary file (caffemodel) and reproduce the initialized data
- merge : copy the trained weights from a caffemodel into a prototxt file

## How to use each command
2 changes: 1 addition & 1 deletion compiler/circle2circle/README.md
@@ -1,3 +1,3 @@
# circle2circle

_circle2circle_ provides Circle optimizations as executable tool
_circle2circle_ provides Circle optimizations as an executable tool
2 changes: 1 addition & 1 deletion compiler/kuma/README.md
@@ -4,4 +4,4 @@ _kuma_ is a collection of offline memory allocators.

## What does "kuma" mean?

_kuma_ originates from _cooma_ which is an abbreviation of **C**ollection **O**f **O**ffline **M**emory **A**lloators.
_kuma_ originates from _cooma_ which is an abbreviation of **C**ollection **O**f **O**ffline **M**emory **A**llocators.
4 changes: 2 additions & 2 deletions compiler/loco/doc/LEP_000_Dialect_Service.md
@@ -64,7 +64,7 @@ struct GraphOutputIndexQueryService : public DialectService

This proposal extends ``Dialect`` class with ``service`` method.

Each dialect SHOULD return a valid pointer on ``service<Service>`` method call if it implements that service. Otherwise, it SHOULD return a null pointer otherwise.
Each dialect SHOULD return a valid pointer on ``service<Service>`` method call if it implements that service. Otherwise, it SHOULD return a null pointer.

**WARNING** It is impossible to use ``get``. ``get`` is currently reserved for singleton accessor.

@@ -106,7 +106,7 @@ std::vector<loco::Node *> output_nodes(loco::Graph *g)

### How to register a service

Each dialect should invoke protected ``service`` method during its construction.
Each dialect should invoke the protected ``service`` method during its construction.
```cxx
AwesomeDialect::AwesomeDialect()
{
8 changes: 4 additions & 4 deletions compiler/locoex-customop/README.md
@@ -1,9 +1,9 @@
# locoex

_locoex_ is an extention of loco. Classes with `COp` prefix enables *Custom Operation*.
_locoex_ is an extension of loco. Classes with the `COp` prefix enable *Custom Operation*.
In this version, a *custom operation* means one of the following:

1. an op that is supported by Tensorflow but not supported both by the moco and the onert
1. an op that is not supported by Tensorflow, moco, and the onert
1. an op that is supported by Tensorflow but not supported by either moco or onert
1. an op that is not supported by Tensorflow, moco or onert

`COpCall` node will represent IR entity that calls custom operations and kernels.
`COpCall` node will represent an IR entity that calls custom operations and kernels.
24 changes: 12 additions & 12 deletions compiler/locomotiv/README.md
@@ -2,7 +2,7 @@
_locomotiv_ is a reference interpreter for _loco_ IR.

# Purpose
- _locomotiv_ would serve as code level specification and reference implementation for loco IR.
- _locomotiv_ would serve as code level specification and a reference implementation for loco IR.
- _locomotiv_ is required for loco-related tools to be tested.

# Sample code to use locomotiv library
@@ -60,31 +60,31 @@ case loco::DataType::FLOAT32:
4. Test new node execution at `locomotiv/src/Node/TheNode.test.cpp` if possible.

### Note on internal data layout rule
For each domain(see `loco::Domain`), `locomotiv` has fixed layout rule on how to store its data in memory.
For each domain (see `loco::Domain`), `locomotiv` has a fixed layout rule on how to store its data in memory.
- Feature is represented as NHWC layout
- That is number of batch(N), height(H), width(W) and channel depth(C)
- That is number of batch (N), height (H), width (W) and channel depth (C)
- Filter is represented as NHWC layout
- That is number of filter(N), height(H), width(W) and input channel depth(C)
- That is number of filter (N), height (H), width (W) and input channel depth (C)
- DepthwiseFilter is represented as HWCM layout
- That is height(H), width(W), input channel depth(C) and depth multiplier(M)
- That is height (H), width (W), input channel depth (C) and depth multiplier (M)
- Matrix is represented as HW layout
- That is height(H), width(W)
- That is height (H), width (W)

### Notes on step 3
- Mocking Tensorflow lite `reference_op.h` might be a good place to start.
- `execute()` can be called multiple time. It just recalculates and updates annotated data. So it should `erase_annot_data()` before newly `annot_data()`.
- `execute()` can be called multiple times. It just recalculates and updates annotated data. So it should call `erase_annot_data()` before annotating new data with `annot_data()`.
- Most node execution behaviour would be implemented for each data type.
- `execute()` should throw runtime error on invalid cases. Some of these cases are explained:
- Invalid argument node
- e.g.) Pull -> MaxPool2D is invalid as MaxPool2D requires feature map as its argument.
- e.g. Pull -> MaxPool2D is invalid as MaxPool2D requires feature map as its argument.
- Lack of argument data
- e.g.) Given 'Pull -> Push' graph. On execution of Push, if no NodeData annotated to Pull, it is invalid.
- e.g. Given 'Pull -> Push' graph. On execution of Push, if no NodeData annotated to Pull, it is invalid.
- Mismatch of argument shapes
- e.g.) Addition between 2x2 and 3x3 tensor is invalid
- e.g.) MaxPool2D expects its ifm to be 4D feature, otherwise invalid.
- e.g. Addition between 2x2 and 3x3 tensor is invalid
- e.g. MaxPool2D expects its ifm to be 4D feature, otherwise invalid.
- Mismatch between node's own information and inferred information
- Some nodes already have attributes like shape or data type. If the inferred information differs from the node's existing information, it is invalid.

### Recommendation on step 4 (test)
- If the node has no arguments, create a node object and `NodeExecution::run()` on it. Check whether it operates correctly.
- If the node has N(>= 1) arguments, make N pull node inputs, source them to the node to be tested. FeatureEncode or FilterEncode node may be required inbetween depending on the node's argument type. Then annotate N pull nodes with its data, `NodeExecution::run()` on the node to test, and check whether it operates correctly.
- If the node has N (>= 1) arguments, make N pull node inputs and source them to the node to be tested. A FeatureEncode or FilterEncode node may be required in between, depending on the node's argument type. Then annotate the N pull nodes with data, `NodeExecution::run()` on the node to test, and check whether it operates correctly.
2 changes: 1 addition & 1 deletion compiler/luci/log/README.md
@@ -1,3 +1,3 @@
# luci-log

_luci-log_ is a logging framework for _luci_ compiler framework.
_luci-log_ is a logging framework for the _luci_ compiler framework.
2 changes: 1 addition & 1 deletion compiler/luci/logex/README.md
@@ -1,3 +1,3 @@
# luci-logex

_luci-logex_ is a extended logging utility for _luci_ compiler framework.
_luci-logex_ is an extended logging utility for the _luci_ compiler framework.
7 changes: 3 additions & 4 deletions compiler/mir/Readme.md
@@ -21,16 +21,15 @@ special attributes specific to different operation types.
Mir has a protobuf serializer/deserializer for shapes and tensors (see `mir.proto` schema).

For a list of currently supported operations, see `mir/ops/operations.lst.h`.

### How to use
Can be included as a `CMake` target.

### TODO

* Expand serialization
* Add More to readme

### Dependencies

Mir depends on `adtitas` library, which provides the `small_vector` data type.

Mir depends on the `adtitas` library, which provides the `small_vector` data type.
2 changes: 1 addition & 1 deletion compiler/moco-log/README.md
@@ -1,3 +1,3 @@
# moco-log

_moco-log_ is a logging framework for _moco_ compiler framework.
_moco-log_ is a logging framework for the _moco_ compiler framework.
4 changes: 2 additions & 2 deletions compiler/moco-tf/README.md
@@ -4,7 +4,7 @@ _moco-tf_ translates a TensorFlow model into _loco_

## Purpose

_moco-tf_ is to convert TensorFlow generated model file to in-memory _loco_ IR Graph.
_moco-tf_ converts a TensorFlow generated model file to in-memory _loco_ IR Graph.

## How to use

@@ -22,7 +22,7 @@

## Dependency

Please refer [requires.cmake](./requires.cmake) for dependant modules.
Please refer to [requires.cmake](./requires.cmake) for dependent modules.

## Naming rules

2 changes: 1 addition & 1 deletion compiler/moco/support/README.md
@@ -1,3 +1,3 @@
# support

_support_ privides _moco_ support libraries
_support_ provides _moco_ support libraries
2 changes: 1 addition & 1 deletion compiler/nnc/README.md
@@ -4,7 +4,7 @@ Neural Network Compiler
### DESCRIPTION

nnc is a neural network compiler that transforms neural networks of various formats into source or machine code.
> At this moment only two NN are supported (MobileNet and InceptionV3) in Tensorflow Lite or Caffe format.
> At this moment, only two NN are supported (MobileNet and InceptionV3) in Tensorflow Lite or Caffe format.

### SYNOPSIS

8 changes: 1 addition & 7 deletions compiler/nnc/utils/model_runner/readme.md
@@ -1,14 +1,8 @@
# How I run a model on my computer

sections:
a) goal of this script
b) examples of code running in author's local machine
c) parametrs and short comment
____
## goal of this script

Here the author has attempted to implement a program capable of running a model in any of 4 formats (caffe, caffe2, tflite, onnx) in a simple and user-friendly manner. The goal of the program is to produce a file containing the output of the computation graph.
_______

## examples of code running in author's local machine
The purpose of the examples below is to demonstrate which arguments, and in which order, you should use to run this script correctly.
@@ -32,7 +26,7 @@ $ python model_runner.py -m onnx_runer/model.onnx -i RANDOM.hdf5

------

## parametrs and short comment
## parameters and short comment

-m means the pre-trained model which you run
-i means the model's input
2 changes: 1 addition & 1 deletion compiler/onnx2circle/README.md
@@ -1,3 +1,3 @@
# onnx2circle

_onnx2circle_ is a ONNX-to-Circle model converter.
_onnx2circle_ is an ONNX-to-Circle model converter.
2 changes: 1 addition & 1 deletion compiler/plier-tf/README.md
@@ -1,3 +1,3 @@
# plier-tf

_plier-tf_ is a collection of small tools to handle TensorFlow model.
_plier-tf_ is a collection of small tools to handle TensorFlow models.
8 changes: 4 additions & 4 deletions compiler/tf2tfliteV2/README.md
@@ -3,11 +3,11 @@
_tf2tfliteV2_ is a TensorFlow to TensorFlow Lite model Converter.

## Where does V2 come from?
Even though we alreay have _tf2tflite_, we cannot cover all opeartors in TensorFlow. To expand coverage, we introduce _tf2tfliteV2_ which uses `TensorFlow Lite Converter`(by Google) internally.
Even though we already have _tf2tflite_, we cannot cover all operators in TensorFlow. To expand coverage, we introduce _tf2tfliteV2_ which internally uses `TensorFlow Lite Converter` (by Google).

## Prerequisite
- Frozen graph from TensorFlow 1.13.1 in binary(`*.pb`) or text(`*.pbtxt`) format
- Desired version of TensorFlow(You can use python virtualenv, docker, etc.)
- Frozen graph from TensorFlow 1.13.1 in binary (`*.pb`) or text (`*.pbtxt`) format
- Desired version of TensorFlow (You can use python virtualenv, docker, etc.)

## Example
```
@@ -42,7 +42,7 @@ python tf2tfliteV2.py \
> --output_arrays=output,output:1,output:2
```

## optional argument
## Optional arguments
```
-h, --help show this help message and exit
--v1 Use TensorFlow Lite Converter 1.x