[MO] Align MO namespaces (#7708)

* Moved and merged mo/ and extensions/ into openvino/tools/mo

* edited imports

* edited docs to use mo script from entry_point

* edited MO transformations list loading and setup.py

* changed full path -> 'mo' entry point in docs (leftovers)

* corrected package_BOM

* updated resolving --transformation_config in cli_parser.py

* pkgutil-style __init__.py, added summarize_graph into entry points

* updated DOCs for the new --transformations_config

* fix select

* updated install instructions, fixed setup.py for windows and python_version < 3.8

* fixed typo in requirements.txt

* resolved conflicts

* removed creating custom __init__.py from setup.py

* corrected folder with caffe proto

* corrected loading user defined extensions

* fix openvino.tools.mo import in serialize.py

* corrected layer tests for new namespace

* fix in get_testdata.py

* moved model-optimizer into tools/

* renamed import in POT

* corrected mo.yml

* correct CMakeLists.txt for the newest tools/mo

* corrected find_ie_version.py

* docs and openvino-dev setup.py update for the newest tools/mo

* miscellaneous leftovers and fixes

* corrected CI files, pybind11_add_module in CMakeLists.txt and use of tools/mo path instead of tools/model_optimizer

* add_subdirectory pybind11 for tools/mo

* POT path fix

* updated setupvars.sh setupvars.bat

* Revert "updated setupvars.sh setupvars.bat"

This reverts commit c011142.

* removed model-optimizer env variables from setupvars

* updated CMakeLists.txt to pack MO properly with tests component

* corrected left imports, corrected loading requirements for layer tests

* mo doc typo correction

* minor corrections in docs; removed summarize_graph from entry_points

* get_started_windows.md, MonoDepth_how_to.md corrections, mo path corrections
pavel-esir committed Dec 8, 2021
1 parent d502208 commit 980904c
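In short, the Model Optimizer sources that previously lived in the top-level `mo/` and `extensions/` packages under `model-optimizer/` now sit under the `openvino.tools.mo` namespace in `tools/mo/`, and the documentation switches from calling `mo.py` directly to the `mo` console entry point. The sketch below summarizes the before/after usage; it assumes an environment with the `openvino-dev` package installed and is illustrative only, not part of the diff.

```sh
# Illustrative sketch only (not part of this commit): assumes the openvino-dev
# package, which now ships openvino.tools.mo, is installed in the active environment.

# Before this change, Model Optimizer was invoked as a script:
#   python3 <INSTALL_DIR>/model-optimizer/mo.py --input_model INPUT_MODEL

# After this change, the 'mo' console entry point is used instead:
mo --input_model INPUT_MODEL --output_dir <OUTPUT_MODEL_DIR>

# The merged package is importable under the aligned namespace:
python3 -c "import openvino.tools.mo"
```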
Showing 2,047 changed files with 33,743 additions and 33,498 deletions.
4 changes: 2 additions & 2 deletions .ci/azure/linux.yml
@@ -121,8 +121,8 @@ jobs:
# For running ONNX frontend unit tests
python3 -m pip install -r $(REPO_DIR)/src/core/tests/requirements_test_onnx.txt
# For MO unit tests
- python3 -m pip install -r $(REPO_DIR)/model-optimizer/requirements.txt
- python3 -m pip install -r $(REPO_DIR)/model-optimizer/requirements_dev.txt
+ python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements.txt
+ python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_dev.txt
# Speed up build
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
4 changes: 2 additions & 2 deletions .ci/azure/linux_lohika.yml
@@ -90,8 +90,8 @@ jobs:
# For running ONNX frontend unit tests
python3 -m pip install -r $(REPO_DIR)/src/core/tests/requirements_test_onnx.txt
# For MO unit tests
- python3 -m pip install -r $(REPO_DIR)/model-optimizer/requirements.txt
- python3 -m pip install -r $(REPO_DIR)/model-optimizer/requirements_dev.txt
+ python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements.txt
+ python3 -m pip install -r $(REPO_DIR)/tools/mo/requirements_dev.txt
# Speed up build
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
4 changes: 2 additions & 2 deletions .ci/azure/windows.yml
@@ -114,8 +114,8 @@ jobs:
rem For running ONNX frontend unit tests
python -m pip install -r $(REPO_DIR)\src\core\tests\requirements_test_onnx.txt
rem For MO unit tests
- python -m pip install -r $(REPO_DIR)\model-optimizer\requirements.txt
- python -m pip install -r $(REPO_DIR)\model-optimizer\requirements_dev.txt
+ python -m pip install -r $(REPO_DIR)\tools\mo\requirements.txt
+ python -m pip install -r $(REPO_DIR)\tools\mo\requirements_dev.txt
rem Speed up build
certutil -urlcache -split -f https://github.com/Kitware/CMake/releases/download/v$(CMAKE_VERSION)/cmake-$(CMAKE_VERSION)-windows-x86_64.zip cmake-$(CMAKE_VERSION)-windows-x86_64.zip
powershell -command "Expand-Archive -Force cmake-$(CMAKE_VERSION)-windows-x86_64.zip"
15 changes: 7 additions & 8 deletions .github/workflows/mo.yml
@@ -2,10 +2,10 @@ name: MO
on:
  push:
    paths:
-       - 'model-optimizer/**'
+       - 'openvino/tools/mo/**'
  pull_request:
    paths:
-       - 'model-optimizer/**'
+       - 'openvino/tools/mo/**'

jobs:
  Pylint-UT:
@@ -24,7 +24,7 @@ jobs:
      uses: actions/cache@v1
      with:
        path: ~/.cache/pip
-       key: ${{ runner.os }}-pip-${{ hashFiles('model-optimizer/requirements*.txt') }}
+       key: ${{ runner.os }}-pip-${{ hashFiles('openvino/tools/mo/requirements*.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-
          ${{ runner.os }}-
@@ -43,11 +43,11 @@ jobs:
        # requrements for CMake
        sudo apt update
        sudo apt --assume-yes install libusb-1.0-0-dev
-     working-directory: model-optimizer
+     working-directory: openvino/tools/mo

    - name: Pylint
-     run: pylint -d C,R,W mo/ mo.py extensions/
-     working-directory: model-optimizer
+     run: pylint -d C,R,W openvino/tools/mo/ openvino/tools/mo/mo.py
+     working-directory: openvino/tools/mo

    - name: CMake
      run: |
@@ -62,5 +62,4 @@ jobs:
        env
        mkdir ../mo-ut-logs
        python3 -m xmlrunner discover -p *_test.py --output=../mo-ut-logs
-     working-directory: model-optimizer
-
+     working-directory: openvino/tools/mo
22 changes: 11 additions & 11 deletions .gitignore
@@ -48,14 +48,14 @@ __pycache__
*pylint_report_comments.txt

# Artifacts
- /model-optimizer/*.bin
- /model-optimizer/*.xml
- /model-optimizer/*.json
- /model-optimizer/*.so
- /model-optimizer/*.txt
- /model-optimizer/*.pb
- /model-optimizer/*.pbtxt
- /model-optimizer/!CMakeLists.txt
- /model-optimizer/*.mapping
- /model-optimizer/*.dat
- /model-optimizer/*.svg
+ /tools/mo/*.bin
+ /tools/mo/*.xml
+ /tools/mo/*.json
+ /tools/mo/*.so
+ /tools/mo/*.txt
+ /tools/mo/*.pb
+ /tools/mo/*.pbtxt
+ /tools/mo/!CMakeLists.txt
+ /tools/mo/*.mapping
+ /tools/mo/*.dat
+ /tools/mo/*.svg
1 change: 0 additions & 1 deletion CMakeLists.txt
@@ -96,7 +96,6 @@ add_subdirectory(src)
add_subdirectory(samples)
add_subdirectory(inference-engine)
include(cmake/extra_modules.cmake)
- add_subdirectory(model-optimizer)
add_subdirectory(docs)
add_subdirectory(tools)
add_subdirectory(scripts)
2 changes: 1 addition & 1 deletion CODEOWNERS
@@ -63,7 +63,7 @@ azure-pipelines.yml @openvinotoolkit/openvino-admins
/inference-engine/tests/functional/inference_engine/transformations/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers

# MO:
- /model-optimizer/ @openvinotoolkit/openvino-mo-maintainers
+ /tools/mo/ @openvinotoolkit/openvino-mo-maintainers

# nGraph:
/src/core/ @openvinotoolkit/openvino-ngraph-maintainers
6 changes: 3 additions & 3 deletions docs/HOWTO/Custom_Layers_Guide.md
@@ -137,7 +137,7 @@ and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Conver
for more details and command line parameters used for the model conversion.

```bash
- ./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
+ mo --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
```
> **NOTE:** This conversion guide is applicable for the 2021.3 release of OpenVINO and that starting from 2021.4
> the OpenVINO supports this model out of the box.
@@ -258,7 +258,7 @@ The implementation should be saved to the file `mo_extensions/front/tf/ComplexAb

Now it is possible to convert the model using the following command line:
```bash
- ./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
+ mo --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
```

The sub-graph corresponding to the originally non-supported one is depicted in the image below:
@@ -322,7 +322,7 @@ The result of this command is a compiled shared library (`.so` or `.dll`). It sh
application using `Core` class instance method `AddExtension` like this
`core.AddExtension(std::make_shared<Extension>(compiled_library_file_name), "CPU");`.

- To test that the extension is implemented correctly we can run the "mri_reconstruction_demo.py" with the following content:
+ To test that the extension is implemented correctly we can run the "mri_reconstruction_demo" with the following content:

@snippet mri_reconstruction_demo.py mri_demo:demo

2 changes: 1 addition & 1 deletion docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -21,7 +21,7 @@ The IR is a pair of files describing the model:
Below is a simple command running Model Optimizer to generate an IR for the input model:

```sh
- python3 mo.py --input_model INPUT_MODEL
+ mo --input_model INPUT_MODEL
```
To learn about all Model Optimizer parameters and conversion technics, see the [Converting a Model to IR](prepare_model/convert_model/Converting_Model.md) page.

12 changes: 6 additions & 6 deletions docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
@@ -28,7 +28,7 @@ For example, to add the description of the `CustomReshape` layer, which is an ar

2. Generate a new parser:
```shell
- cd <INSTALL_DIR>/tools/model_optimizer/mo/front/caffe/proto
+ cd <SITE_PACKAGES_WITH_INSTALLED_OPENVINO>/openvino/tools/mo/front/caffe/proto
python3 generate_caffe_pb2.py --input_proto <PATH_TO_CUSTOM_CAFFE>/src/caffe/proto/caffe.proto
```
where `PATH_TO_CUSTOM_CAFFE` is the path to the root directory of custom Caffe\*.
@@ -66,7 +66,7 @@ The mean file that you provide for the Model Optimizer must be in a `.binaryprot

#### 7. What does the message "Invalid proto file: there is neither 'layer' nor 'layers' top-level messages" mean? <a name="question-7"></a>

- The structure of any Caffe\* topology is described in the `caffe.proto` file of any Caffe version. For example, in the Model Optimizer, you can find the following proto file, used by default: `<INSTALL_DIR>/tools/model_optimizer/mo/front/caffe/proto/my_caffe.proto`. There you can find the structure:
+ The structure of any Caffe\* topology is described in the `caffe.proto` file of any Caffe version. For example, in the Model Optimizer, you can find the following proto file, used by default: `mo/front/caffe/proto/my_caffe.proto`. There you can find the structure:
```
message NetParameter {
// ... some other parameters
@@ -81,7 +81,7 @@ This means that any topology should contain layers as top-level structures in `p

#### 8. What does the message "Old-style inputs (via 'input_dims') are not supported. Please specify inputs via 'input_shape'" mean? <a name="question-8"></a>

- The structure of any Caffe\* topology is described in the `caffe.proto` file for any Caffe version. For example, in the Model Optimizer you can find the following `.proto` file, used by default: `<INSTALL_DIR>/tools/model_optimizer/mo/front/caffe/proto/my_caffe.proto`. There you can find the structure:
+ The structure of any Caffe\* topology is described in the `caffe.proto` file for any Caffe version. For example, in the Model Optimizer you can find the following `.proto` file, used by default: `mo/front/caffe/proto/my_caffe.proto`. There you can find the structure:
```sh
message NetParameter {

@@ -350,15 +350,15 @@ The specified input shape cannot be parsed. Please, define it in one of the foll
*
```shell
- python3 mo.py --input_model <INPUT_MODEL>.caffemodel --input_shape (1,3,227,227)
+ mo --input_model <INPUT_MODEL>.caffemodel --input_shape (1,3,227,227)
```
*
```shell
- python3 mo.py --input_model <INPUT_MODEL>.caffemodel --input_shape [1,3,227,227]
+ mo --input_model <INPUT_MODEL>.caffemodel --input_shape [1,3,227,227]
```
* In case of multi input topology you should also specify inputs:
```shell
- python3 mo.py --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),(1,6,1,1)
+ mo --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),(1,6,1,1)
```
Keep in mind that there is no space between and inside the brackets for input shapes.
@@ -39,9 +39,9 @@ A summary of the steps for optimizing and deploying a model that was trained wit
To convert a Caffe\* model:

1. Go to the `$INTEL_OPENVINO_DIR/tools/model_optimizer` directory.
- 2. Use the `mo.py` script to simply convert a model, specifying the path to the input model `.caffemodel` file and the path to an output directory with write permissions:
+ 2. Use the `mo` script to simply convert a model, specifying the path to the input model `.caffemodel` file and the path to an output directory with write permissions:
```sh
- python3 mo.py --input_model <INPUT_MODEL>.caffemodel --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model <INPUT_MODEL>.caffemodel --output_dir <OUTPUT_MODEL_DIR>
```

Two groups of parameters are available to convert your model:
@@ -94,13 +94,13 @@ Caffe*-specific parameters:
* Launching the Model Optimizer for the [bvlc_alexnet.caffemodel](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) with a specified `prototxt` file. This is needed when the name of the Caffe\* model and the `.prototxt` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input `model.caffemodel` file. You must have write permissions for the output directory.

```sh
- python3 mo.py --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt --output_dir <OUTPUT_MODEL_DIR>
```

* Launching the Model Optimizer for the [bvlc_alexnet.caffemodel](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) with a specified `CustomLayersMapping` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe\* system on the computer. To read more about this, see [Legacy Mode for Caffe* Custom Layers](../customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md).
Optional parameters without default values and not specified by the user in the `.prototxt` file are removed from the Intermediate Representation, and nested parameters are flattened:
```sh
- python3 mo.py --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params --output_dir <OUTPUT_MODEL_DIR>
```
This example shows a multi-input model with input layers: `data`, `rois`
```
@@ -124,7 +124,7 @@ layer {

* Launching the Model Optimizer for a multi-input model with two inputs and providing a new shape for each input in the order they are passed to the Model Optimizer along with a writable output directory. In particular, for data, set the shape to `1,3,227,227`. For rois, set the shape to `1,6,1,1`:
```sh
- python3 mo.py --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),[1,6,1,1] --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),[1,6,1,1] --output_dir <OUTPUT_MODEL_DIR>
```

## Custom Layer Definition
@@ -34,9 +34,9 @@ A summary of the steps for optimizing and deploying a model that was trained wit
To convert a Kaldi\* model:

1. Go to the `<INSTALL_DIR>/tools/model_optimizer` directory.
- 2. Use the `mo.py` script to simply convert a model with the path to the input model `.nnet` or `.mdl` file and to an output directory where you have write permissions:
+ 2. Use the `mo` script to simply convert a model with the path to the input model `.nnet` or `.mdl` file and to an output directory where you have write permissions:
```sh
- python3 mo.py --input_model <INPUT_MODEL>.nnet --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model <INPUT_MODEL>.nnet --output_dir <OUTPUT_MODEL_DIR>
```

Two groups of parameters are available to convert your model:
@@ -60,12 +60,12 @@ Kaldi-specific parameters:

* To launch the Model Optimizer for the wsj_dnn5b_smbr model with the specified `.nnet` file and an output directory where you have write permissions:
```sh
- python3 mo.py --input_model wsj_dnn5b_smbr.nnet --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model wsj_dnn5b_smbr.nnet --output_dir <OUTPUT_MODEL_DIR>
```

* To launch the Model Optimizer for the wsj_dnn5b_smbr model with existing file that contains counts for the last layer with biases and a writable output directory:
```sh
- python3 mo.py --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --output_dir <OUTPUT_MODEL_DIR>_
+ mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --output_dir <OUTPUT_MODEL_DIR>_
```
* The Model Optimizer normalizes сounts in the following way:
\f[
@@ -83,7 +83,7 @@ python3 mo.py --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts -
* If you want to remove the last SoftMax layer in the topology, launch the Model Optimizer with the
`--remove_output_softmax` flag.
```sh
- python3 mo.py --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax --output_dir <OUTPUT_MODEL_DIR>_
+ mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax --output_dir <OUTPUT_MODEL_DIR>_
```
The Model Optimizer finds the last layer of the topology and removes this layer only if it is a SoftMax layer.

@@ -44,9 +44,9 @@ A summary of the steps for optimizing and deploying a model that was trained wit
To convert an MXNet\* model:

1. Go to the `<INSTALL_DIR>/tools/model_optimizer` directory.
- 2. To convert an MXNet\* model contained in a `model-file-symbol.json` and `model-file-0000.params`, run the Model Optimizer launch script `mo.py`, specifying a path to the input model file and a path to an output directory with write permissions:
+ 2. To convert an MXNet\* model contained in a `model-file-symbol.json` and `model-file-0000.params`, run the Model Optimizer launch script `mo`, specifying a path to the input model file and a path to an output directory with write permissions:
```sh
- python3 mo_mxnet.py --input_model model-file-0000.params --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model model-file-0000.params --output_dir <OUTPUT_MODEL_DIR>
```

Two groups of parameters are available to convert your model:
@@ -60,9 +60,9 @@ The Model Optimizer process assumes you have an ONNX model that was directly dow
To convert an ONNX\* model:

1. Go to the `<INSTALL_DIR>/tools/model_optimizer` directory.
- 2. Use the `mo.py` script to simply convert a model with the path to the input model `.nnet` file and an output directory where you have write permissions:
+ 2. Use the `mo` script to simply convert a model with the path to the input model `.nnet` file and an output directory where you have write permissions:
```sh
- python3 mo.py --input_model <INPUT_MODEL>.onnx --output_dir <OUTPUT_MODEL_DIR>
+ mo --input_model <INPUT_MODEL>.onnx --output_dir <OUTPUT_MODEL_DIR>
```

There are no ONNX\* specific parameters, so only [framework-agnostic parameters](Converting_Model_General.md) are available to convert your model.
@@ -33,10 +33,10 @@ A summary of the steps for optimizing and deploying a model trained with Paddle\

To convert a Paddle\* model:

- 1. Go to the `$INTEL_OPENVINO_DIR/tools/model_optimizer` directory.
- 2. Use the `mo.py` script to simply convert a model, specifying the framework, the path to the input model `.pdmodel` file and the path to an output directory with write permissions:
+ 1. Activate environment with installed OpenVINO if needed
+ 2. Use the `mo` script to simply convert a model, specifying the framework, the path to the input model `.pdmodel` file and the path to an output directory with write permissions:
```sh
- python3 mo.py --input_model <INPUT_MODEL>.pdmodel --output_dir <OUTPUT_MODEL_DIR> --framework=paddle
+ mo --input_model <INPUT_MODEL>.pdmodel --output_dir <OUTPUT_MODEL_DIR> --framework=paddle
```

Parameters to convert your model:
@@ -47,7 +47,7 @@ Parameters to convert your model:
### Example of Converting a Paddle* Model
Below is the example command to convert yolo v3 Paddle\* network to OpenVINO IR network with Model Optimizer.
```sh
- python3 mo.py --model_name yolov3_darknet53_270e_coco --output_dir <OUTPUT_MODEL_DIR> --framework=paddle --data_type=FP32 --reverse_input_channels --input_shape=[1,3,608,608],[1,2],[1,2] --input=image,im_shape,scale_factor --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1 --input_model=yolov3.pdmodel
+ mo --model_name yolov3_darknet53_270e_coco --output_dir <OUTPUT_MODEL_DIR> --framework=paddle --data_type=FP32 --reverse_input_channels --input_shape=[1,3,608,608],[1,2],[1,2] --input=image,im_shape,scale_factor --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1 --input_model=yolov3.pdmodel
```

## Supported Paddle\* Layers
