
Merge pull request #41 from microsoft/nn-meter-refactor
Refactor of nn-Meter project
Lynazhang committed Nov 9, 2021
2 parents 98bc134 + 243a079 commit 759854e
Showing 73 changed files with 442 additions and 409 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -98,16 +98,16 @@ After installation, a command named `nn-meter` is enabled. To predict the latenc

```bash
# for Tensorflow (*.pb) file
-nn-meter lat_pred --predictor <hardware> [--predictor-version <version>] --tensorflow <pb-file_or_folder>
+nn-meter predict --predictor <hardware> [--predictor-version <version>] --tensorflow <pb-file_or_folder>

# for ONNX (*.onnx) file
-nn-meter lat_pred --predictor <hardware> [--predictor-version <version>] --onnx <onnx-file_or_folder>
+nn-meter predict --predictor <hardware> [--predictor-version <version>] --onnx <onnx-file_or_folder>

# for torch model from torchvision model zoo (str)
-nn-meter lat_pred --predictor <hardware> [--predictor-version <version>] --torchvision <model-name> <model-name>...
+nn-meter predict --predictor <hardware> [--predictor-version <version>] --torchvision <model-name> <model-name>...

# for nn-Meter IR (*.json) file
-nn-meter lat_pred --predictor <hardware> [--predictor-version <version>] --nn-meter-ir <json-file_or_folder>
+nn-meter predict --predictor <hardware> [--predictor-version <version>] --nn-meter-ir <json-file_or_folder>
```
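The four invocation forms above differ only in the flag that names the input format. As a rough illustration of that mapping (a hypothetical helper, not part of nn-Meter; only the command and flag names are taken from the documentation above), one might build the argument list like this:

```python
import os

# Hypothetical helper (not part of nn-Meter): choose the input-format flag
# from the model file's extension, mirroring the commands above.
FLAG_BY_EXT = {
    ".pb": "--tensorflow",     # frozen TensorFlow graph
    ".onnx": "--onnx",         # ONNX model
    ".json": "--nn-meter-ir",  # nn-Meter IR graph
}

def predict_command(hardware, model_path, version=None):
    """Build the `nn-meter predict` argument list for one model file."""
    ext = os.path.splitext(model_path)[1].lower()
    flag = FLAG_BY_EXT.get(ext)
    if flag is None:
        raise ValueError("unsupported model format: " + ext)
    cmd = ["nn-meter", "predict", "--predictor", hardware]
    if version is not None:
        cmd += ["--predictor-version", str(version)]
    cmd += [flag, model_path]
    return cmd

print(predict_command("myhardware", "resnet18.onnx"))
# → ['nn-meter', 'predict', '--predictor', 'myhardware', '--onnx', 'resnet18.onnx']
```

Passing such a list to `subprocess.run` would invoke the same command shown above; torchvision model names take a separate path since they are not files.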

The `--predictor-version <version>` argument is optional. When the predictor version is not specified by the user, nn-Meter uses the latest version of the predictor.
11 changes: 11 additions & 0 deletions docs/dataset.md
@@ -0,0 +1,11 @@
# Benchmark dataset

To evaluate the effectiveness of a prediction model on arbitrary DNN models, we need a representative dataset that covers a large prediction scope. nn-Meter collects and generates 26k CNN models (please refer to the paper for the dataset generation method).

We release the dataset and provide the `nn_meter.dataset` interface for users to access it. The interface automatically downloads the nn-Meter bench dataset and returns its local path when called. Users can also download the data from the [Download Link](https://github.com/microsoft/nn-Meter/releases/download/v1.0-data/datasets.zip) on their own. This [example](../examples/nn-meter_predictor_for_bench_dataset.ipynb) shows how to use the nn-Meter predictor to predict latency for the bench dataset.
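The behavior described above, downloading on first access and then returning the local path, follows a standard download-and-cache pattern. A minimal sketch under stated assumptions (the function name, cache location, and unzip step are illustrative guesses, not the actual `nn_meter.dataset` API; the URL is the release link above):

```python
import os
import urllib.request
import zipfile

DATASET_URL = "https://github.com/microsoft/nn-Meter/releases/download/v1.0-data/datasets.zip"

def bench_dataset_path(cache_dir=None, url=DATASET_URL):
    """Return a local directory holding the bench dataset, downloading
    and unpacking the release zip on first use."""
    cache_dir = cache_dir or os.path.join(os.path.expanduser("~"), ".nn_meter", "dataset")
    os.makedirs(cache_dir, exist_ok=True)
    zip_path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(zip_path):
        urllib.request.urlretrieve(url, zip_path)  # fetch once, reuse afterwards
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(cache_dir)
    return cache_dir

# path = bench_dataset_path()  # the first call downloads a multi-GB archive
```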

**Note:** to measure the inference latency of the models in this dataset, we generate TensorFlow pb and TFLite models and measure their latency on the target devices. However, since the full dataset requires hundreds of GB of storage, we did not include these model files. Instead, we parse the pb files and record the model structures and parameters in `nn_meter.dataset`.

Since the dataset is encoded in a graph format, we also provide the `nn_meter.dataset.gnn_dataloader` interface for GNN training. Through this interface, `GNNDataset` and `GNNDataloader` convert the model structures of the bench dataset, stored in `.jsonl` format, into the dataset and data loader required for GNN training. Users can refer to this [example](../examples/nn-meter_dataset_for_gnn.ipynb) for further information on `gnn_dataloader`. Note that the packages `torch` and `dgl` must be installed to apply the nn-Meter bench dataset to GNN training.
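Because each model structure is stored as a JSON record of nodes with `inbounds` and `outbounds` lists, converting it into GNN input largely amounts to building an edge list. A standard-library sketch (the record layout here is assumed from the nn-Meter IR description in `docs/input_models.md`; the real implementation lives in `nn_meter.dataset.gnn_dataloader`):

```python
import json

def graph_to_edges(record):
    """Map one model record (node name -> {inbounds, outbounds, attr})
    to a node index and a directed edge list."""
    index = {name: i for i, name in enumerate(record)}
    edges = []
    for name, node in record.items():
        for src in node.get("inbounds", []):
            if src in index:  # skip dangling references
                edges.append((index[src], index[name]))
    return index, edges

# Each line of the .jsonl file would be parsed with json.loads(line).
sample = json.loads(
    '{"conv1.conv/Conv2D": {"inbounds": [], "outbounds": ["relu1"], "attr": {}},'
    ' "relu1": {"inbounds": ["conv1.conv/Conv2D"], "outbounds": [], "attr": {}}}'
)
index, edges = graph_to_edges(sample)
print(edges)  # → [(0, 1)]
```

From such an edge list, a `dgl.graph(...)` or similar constructor can build the graph object that GNN training expects.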

14 changes: 7 additions & 7 deletions docs/input_models.md
@@ -10,17 +10,17 @@ You can save tensorflow models into frozen pb formats, and use the following nn-

```bash
# for Tensorflow (*.pb) file
-nn-meter --predictor <hardware> --tensorflow <pb-file>
+nn-meter predict --predictor <hardware> [--predictor-version <version>] --tensorflow <pb-file_or_folder>
```

For the other frameworks (e.g., PyTorch), you can convert the models into onnx models, and use the following nn-meter command to predict the latency:

```bash
# for ONNX (*.onnx) file
-nn-meter --predictor <hardware> --onnx <onnx-file>
+nn-meter predict --predictor <hardware> [--predictor-version <version>] --onnx <onnx-file_or_folder>
```

-You can download the test [tensorflow models]("https://github.com/Lynazhang/nnmeter/releases/download/0.1/pb_models.zip") and [onnx models](https://github.com/Lynazhang/nnmeter/releases/download/0.1/onnx_models.zip).
+You can download the test [tensorflow models]("https://github.com/microsoft/nn-Meter/releases/download/v1.0-data/pb_models.zip") and [onnx models](https://github.com/microsoft/nn-Meter/releases/download/v1.0-data/onnx_models.zip).

### Input model as a code object

Expand All @@ -29,7 +29,7 @@ You can also directly apply nn-Meter in your python code. In this case, please d
```python
from nn_meter import load_latency_predictor

-predictor = load_lat_predictor(hardware_name) # case insensitive in backend
+predictor = load_latency_predictor(hardware_name) # case insensitive in backend

# build your model here
model = ... # model is instance of torch.nn.Module
```

@@ -57,14 +57,14 @@ For a *node*, we use the identical node name ("conv1.conv/Conv2D") as the node k
* outbounds: a list of outgoing node names. The inbounds and outbounds describe the node connections.
* attr: a set of attributes for the node. The attributes can differ across different types of NN nodes.

-You can download the example nn-Meter IR graphs through [here](https://github.com/Lynazhang/nnmeter/releases/download/0.1/ir_graphs.zip).
+You can download the example nn-Meter IR graphs through [here](https://github.com/microsoft/nn-Meter/releases/download/v1.0-data/ir_graphs.zip).
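To make the node format concrete, a minimal nn-Meter-IR-style graph can be written down directly in Python; the specific attribute names inside `attr` below are illustrative assumptions rather than the documented schema:

```python
# Minimal nn-Meter-IR-style graph: node names as keys, each node carrying
# inbounds/outbounds lists plus an attribute dict (attr contents illustrative).
ir_graph = {
    "conv1.conv/Conv2D": {
        "inbounds": [],
        "outbounds": ["conv1.relu/Relu"],
        "attr": {"type": "Conv2D", "ks": [3, 3], "strides": [1, 1]},
    },
    "conv1.relu/Relu": {
        "inbounds": ["conv1.conv/Conv2D"],
        "outbounds": [],
        "attr": {"type": "Relu"},
    },
}

# Consistency check: every inbound edge should appear as a matching outbound.
for name, node in ir_graph.items():
    for src in node["inbounds"]:
        assert name in ir_graph[src]["outbounds"]
```

Serializing such a dictionary with `json.dump` yields a file in the same shape as the downloadable IR graphs.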

When you have a large number of models to predict, you can also convert them into nn-Meter IR graphs to save pre-processing time:

```
# for Tensorflow (*.pb) file
-nn-meter getir --tensorflow <pb-file> --output <output-name>
+nn-meter get_ir --tensorflow <pb-file> [--output <output-name>]
# for ONNX (*.onnx) file
-nn-meter getir --onnx <onnx-file> --output <output-name>
+nn-meter get_ir --onnx <onnx-file> [--output <output-name>]
```
6 changes: 4 additions & 2 deletions docs/overview.md
@@ -19,6 +19,8 @@ If you have a new hardware to predict DNN latency, a re-run of nn-Meter is requ
## Learn More
- [Get started](quick_start.md)

-- [How to use nn-Meter](usage.md)
+- [How to use nn-Meter Predictor](predictor/usage.md)

-- [nn-meter in hardware-aware NAS](hardware-aware-model-design.md)
+- [nn-Meter in hardware-aware NAS](predictor/hardware-aware-model-design.md)

- [nn-Meter bench dataset](dataset.md)
