Commit 74b3b38
modify readme for Neural Coder (#1387)
kaikaiyao committed Oct 29, 2022
1 parent f728eb6 commit 74b3b38
Showing 4 changed files with 13 additions and 6 deletions.
9 changes: 8 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -66,6 +66,13 @@ dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()
```
### Quantization with [JupyterLab Extension](./neural_coder/extensions/neural_compressor_ext_lab/README.md) (Experimental)
Search for ```jupyter-lab-neural-compressor``` in the Extension Manager in JupyterLab and install with one click:

<a target="_blank" href="./neural_coder/extensions/screenshots/extmanager.png">
<img src="./neural_coder/extensions/screenshots/extmanager.png" alt="Extension" width="35%" height="35%">
</a>

### Quantization with [GUI](./docs/bench.md)
```shell
# An ONNX Example
@@ -79,7 +86,7 @@ inc_bench
<img src="./docs/imgs/INC_GUI.gif" alt="Architecture">
</a>

-### Quantization with [Auto-coding API](./neural_coder/docs/AutoQuant.md) (Experimental)
+### Quantization with [Neural Coder](./neural_coder/docs/Quantization.md) (Experimental)

```python
from neural_coder import auto_quant
# Illustrative call; "example.py" and the empty args string are placeholder
# values for a runnable model script and its CLI arguments.
auto_quant(code="example.py", args="")
```
6 changes: 3 additions & 3 deletions neural_coder/README.md
@@ -35,11 +35,11 @@ simultaneously on below PyTorch evaluation code, we generate the optimized code

## Getting Started!

-### Auto-Quant Feature
-We provide a feature named Auto-Quant that helps automatically enable quantization features on a PyTorch model script and automatically evaluates for the best performance on the model. It is a code-free solution that can help users enable quantization algorithms on a PyTorch model with no manual coding needed. Supported features include Post-Training Static Quantization, Post-Training Dynamic Quantization, and Mixed Precision. For more details please refer to this [guide](docs/AutoQuant.md).
+### Neural Coder for Quantization
+We provide a feature that automatically enables quantization on Deep Learning models and evaluates the quantized model for the best performance. It is a code-free solution: users can enable quantization algorithms on a model with no manual coding needed. Supported features include Post-Training Static Quantization, Post-Training Dynamic Quantization, and Mixed Precision. For more details, please refer to this [guide](docs/AutoQuant.md).
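The "code-free" idea described above can be sketched in a few lines: a tool that patches a user's script by injecting calls after known anchor lines. This toy is purely illustrative of the mechanism (all names in it are made up) and is not Neural Coder's actual implementation:

```python
# Toy sketch of code-free feature enabling: insert a line of code into a
# script right after a matching anchor line. Hypothetical example only;
# Neural Coder's real transformation logic is far more involved.
script = """model = Net()
output = model(x)"""

def enable_feature(code: str, anchor: str, injection: str) -> str:
    """Return `code` with `injection` inserted after each line containing `anchor`."""
    out = []
    for line in code.splitlines():
        out.append(line)
        if anchor in line:
            out.append(injection)
    return "\n".join(out)

patched = enable_feature(script, "model = Net()",
                         "model = quantize_dynamic(model)  # injected")
print(patched)
```

The user's original script never needs hand-editing; the tool rewrites it, runs it, and measures the result.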

### General Guide
-We currently provide 3 main user-facing APIs: enable, bench and superbench.
+We currently provide 3 main user-facing APIs for Neural Coder: enable, bench and superbench.
#### Enable
Users can use ```enable()``` to enable specific features into DL scripts:
```python
# Illustrative call; the script path and feature name are placeholder
# examples (see the Neural Coder feature list for valid feature strings)
from neural_coder import enable
enable(code="example.py", features=["pytorch_inc_dynamic_quant"])
```
@@ -1,6 +1,6 @@
-Auto-Quant Feature
+Neural Coder for Quantization
===========================
-This feature helps automatically enable quantization features on a PyTorch model script and automatically evaluates for the best performance on the model. It is a code-free solution that can help users enable quantization algorithms on a PyTorch model with no manual coding needed. Supported features include Post-Training Static Quantization, Post-Training Dynamic Quantization, and Mixed Precision.
+This feature automatically enables quantization on Deep Learning models and evaluates the quantized model for the best performance. It is a code-free solution: users can enable quantization algorithms on a model with no manual coding needed. Supported features include Post-Training Static Quantization, Post-Training Dynamic Quantization, and Mixed Precision.
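The "evaluates for the best performance" step can be pictured as a small search loop: try each candidate optimization, time it, and keep the fastest. The sketch below is a conceptual toy under that assumption, not Neural Coder's implementation:

```python
import timeit

# Toy "auto-evaluation" loop: benchmark each candidate transform of the
# same workload and report the fastest one. Candidate names and functions
# are made up for illustration.
def baseline(xs):
    return [x * 2 for x in xs]

def preallocated(xs):
    out = [0] * len(xs)
    for i, x in enumerate(xs):
        out[i] = x * 2
    return out

def pick_fastest(candidates, data, number=200):
    """Time each candidate on `data` and return the name of the fastest."""
    timings = {name: timeit.timeit(lambda f=f: f(data), number=number)
               for name, f in candidates.items()}
    return min(timings, key=timings.get)

best = pick_fastest({"baseline": baseline, "prealloc": preallocated},
                    list(range(1000)))
print("fastest candidate:", best)
```

In the real tool the "candidates" are quantization configurations (static, dynamic, mixed precision) and the metric is model throughput or accuracy-constrained speed, but the selection logic follows the same shape.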


## Features Supported
Sorry, we cannot display this file.

0 comments on commit 74b3b38
