Merged
@@ -7,7 +7,7 @@ matrix:
- ${DICT_DIR}/lpot_dict.txt
output: ${DICT_DIR}/lpot_dict.dic
sources:
-- ${REPO_DIR}/docs/*
+- ${REPO_DIR}/docs/source/*.md
- ${REPO_DIR}/*.md
- ${REPO_DIR}/examples/**/*.md|!${REPO_DIR}/examples/pytorch/**/huggingface_models/**/*.md
- ${REPO_DIR}/neural_compressor/**/*.md
7 changes: 4 additions & 3 deletions .github/workflows/publish.yml
@@ -14,16 +14,17 @@ jobs:
- uses: actions/checkout@v1
- name: Install dependencies
run: |
-export PATH="$HOME/.local/bin:$PATH"
+export PATH="$HOME/.local/bin:$PATH/docs"
sudo apt-get install -y python3-setuptools
-pip3 install --user -r sphinx-requirements.txt
+pip3 install --user -r docs/sphinx-requirements.txt
- name: Build the docs
run: |
export PATH="$HOME/.local/bin:$PATH"
+cd docs/
make html
- name: Push the docs
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
-publish_dir: _build/html
+publish_dir: docs/_build/html
publish_branch: latestHTML
34 changes: 0 additions & 34 deletions Makefile

This file was deleted.

64 changes: 32 additions & 32 deletions README.md
@@ -44,7 +44,7 @@ Python version: 3.7, 3.8, 3.9, 3.10
# Or install nightly full version from pip (including GUI)
pip install -i https://test.pypi.org/simple/ neural-compressor-full
```
-More installation methods can be found at [Installation Guide](./docs/installation_guide.md). Please check out our [FAQ](./docs/faq.md) for more details.
+More installation methods can be found at [Installation Guide](./docs/source/installation_guide.md). Please check out our [FAQ](./docs/source/faq.md) for more details.

## Getting Started
### Quantization with Python API
@@ -71,7 +71,7 @@ Search for ```jupyter-lab-neural-compressor``` in the Extension Manager in Jupyt
<img src="./neural_coder/extensions/screenshots/extmanager.png" alt="Extension" width="35%" height="35%">
</a>

-### Quantization with [GUI](./docs/bench.md)
+### Quantization with [GUI](./docs/source/bench.md)
```shell
# An ONNX Example
pip install onnx==1.12.0 onnxruntime==1.12.1 onnxruntime-extensions
@@ -80,8 +80,8 @@ wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/
# Start GUI
inc_bench
```
-<a target="_blank" href="./docs/imgs/INC_GUI.gif">
-<img src="./docs/imgs/INC_GUI.gif" alt="Architecture">
+<a target="_blank" href="./docs/source/_static/imgs/INC_GUI.gif">
+<img src="./docs/source/_static/imgs/INC_GUI.gif" alt="Architecture">
</a>

## System Requirements
@@ -98,7 +98,7 @@ inc_bench

#### Intel® Neural Compressor quantized ONNX models support multiple hardware vendors through ONNX Runtime:

-* Intel CPU, AMD/ARM CPU, and NVidia GPU. Please refer to the validated model [list](./docs/validated_model_list.md#Validated-ONNX-QDQ-INT8-models-on-multiple-hardware-through-ONNX-Runtime).
+* Intel CPU, AMD/ARM CPU, and NVidia GPU. Please refer to the validated model [list](./docs/source/validated_model_list.md#Validated-ONNX-QDQ-INT8-models-on-multiple-hardware-through-ONNX-Runtime).

### Validated Software Environment

@@ -146,11 +146,11 @@ inc_bench
> Set the environment variable ``TF_ENABLE_ONEDNN_OPTS=1`` to enable oneDNN optimizations if you are using TensorFlow v2.6 to v2.8. oneDNN is the default for TensorFlow v2.9.
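The note above amounts to one export before launching TensorFlow; a minimal sketch (the inline check is only for illustration):

```shell
# Enable oneDNN optimizations for TensorFlow v2.6-v2.8 (the default from v2.9).
export TF_ENABLE_ONEDNN_OPTS=1
# Any process started from this shell inherits the flag:
python3 -c 'import os; print(os.environ["TF_ENABLE_ONEDNN_OPTS"])'   # prints 1
```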

### Validated Models
-Intel® Neural Compressor validated 420+ [examples](./examples) for quantization with a performance speedup geomean of 2.2x and up to 4.2x on VNNI while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details for validated models are available [here](docs/validated_model_list.md).
+Intel® Neural Compressor validated 420+ [examples](./examples) for quantization with a performance speedup geomean of 2.2x and up to 4.2x on VNNI while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details for validated models are available [here](./docs/source/validated_model_list.md).

<div style = "width: 77%; margin-bottom: 2%;">
-<a target="_blank" href="./docs/imgs/release_data.png">
-<img src="./docs/imgs/release_data.png" alt="Architecture" width=800 height=500>
+<a target="_blank" href="./docs/source/_static/imgs/release_data.png">
+<img src="./docs/source/_static/imgs/release_data.png" alt="Architecture" width=800 height=500>
</a>
</div>

@@ -164,10 +164,10 @@ Intel® Neural Compressor validated 420+ [examples](./examples) for quantization
</thead>
<tbody>
<tr>
-<td colspan="3" align="center"><a href="docs/design.md">Architecture</a></td>
+<td colspan="3" align="center"><a href="./docs/source/design.md">Architecture</a></td>
<td colspan="2" align="center"><a href="https://github.com/intel/neural-compressor/tree/master/examples">Examples</a></td>
-<td colspan="2" align="center"><a href="docs/bench.md">GUI</a></td>
-<td colspan="2" align="center"><a href="docs/api-introduction.md">APIs</a></td>
+<td colspan="2" align="center"><a href="./docs/source/bench.md">GUI</a></td>
+<td colspan="2" align="center"><a href="./docs/source/api-introduction.md">APIs</a></td>
</tr>
<tr>
<td colspan="5" align="center"><a href="https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html">Intel oneAPI AI Analytics Toolkit</a></td>
@@ -181,10 +181,10 @@ Intel® Neural Compressor validated 420+ [examples](./examples) for quantization
</thead>
<tbody>
<tr>
-<td colspan="2" align="center"><a href="docs/transform.md">Transform</a></td>
-<td colspan="2" align="center"><a href="docs/dataset.md">Dataset</a></td>
-<td colspan="2" align="center"><a href="docs/metric.md">Metric</a></td>
-<td colspan="3" align="center"><a href="docs/objective.md">Objective</a></td>
+<td colspan="2" align="center"><a href="./docs/source/transform.md">Transform</a></td>
+<td colspan="2" align="center"><a href="./docs/source/dataset.md">Dataset</a></td>
+<td colspan="2" align="center"><a href="./docs/source/metric.md">Metric</a></td>
+<td colspan="3" align="center"><a href="./docs/source/objective.md">Objective</a></td>
</tr>
</tbody>
<thead>
@@ -194,20 +194,20 @@ Intel® Neural Compressor validated 420+ [examples](./examples) for quantization
</thead>
<tbody>
<tr>
-<td colspan="2" align="center"><a href="docs/quantization.md">Quantization</a></td>
-<td colspan="1" align="center"><a href="docs/pruning.md">Pruning(Sparsity)</a></td>
-<td colspan="2" align="center"><a href="docs/distillation.md">Knowledge Distillation</a></td>
-<td colspan="2" align="center"><a href="docs/mixed_precision.md">Mixed Precision</a></td>
-<td colspan="2" align="center"><a href="docs/orchestration.md">Orchestration</a></td>
+<td colspan="2" align="center"><a href="./docs/source/Quantization.md">Quantization</a></td>
+<td colspan="1" align="center"><a href="./docs/source/pruning.md">Pruning(Sparsity)</a></td>
+<td colspan="2" align="center"><a href="./docs/source/distillation.md">Knowledge Distillation</a></td>
+<td colspan="2" align="center"><a href="./docs/source/mixed_precision.md">Mixed Precision</a></td>
+<td colspan="2" align="center"><a href="./docs/source/orchestration.md">Orchestration</a></td>
</tr>
<tr>
-<td colspan="2" align="center"><a href="docs/benchmark.md">Benchmarking</a></td>
-<td colspan="3" align="center"><a href="docs/distributed.md">Distributed Training</a></td>
-<td colspan="2" align="center"><a href="docs/model_conversion.md">Model Conversion</a></td>
-<td colspan="2" align="center"><a href="docs/tensorboard.md">TensorBoard</a></td>
+<td colspan="2" align="center"><a href="./docs/source/benchmark.md">Benchmarking</a></td>
+<td colspan="3" align="center"><a href="./docs/source/distributed.md">Distributed Training</a></td>
+<td colspan="2" align="center"><a href="./docs/source/model_conversion.md">Model Conversion</a></td>
+<td colspan="2" align="center"><a href="./docs/source/tensorboard.md">TensorBoard</a></td>
</tr>
<tr>
-<td colspan="4" align="center"><a href="docs/distillation_quantization.md">Distillation for Quantization</a></td>
+<td colspan="4" align="center"><a href="./docs/source/distillation_quantization.md">Distillation for Quantization</a></td>
<td colspan="5" align="center"><a href="neural_coder">Neural Coder</a></td>
</tr>

@@ -219,9 +219,9 @@ Intel® Neural Compressor validated 420+ [examples](./examples) for quantization
</thead>
<tbody>
<tr>
-<td colspan="3" align="center"><a href="docs/adaptor.md">Adaptor</a></td>
-<td colspan="3" align="center"><a href="docs/tuning_strategies.md">Strategy</a></td>
-<td colspan="3" align="center"><a href="docs/reference_examples.md">Reference Example</a></td>
+<td colspan="3" align="center"><a href="./docs/source/adaptor.md">Adaptor</a></td>
+<td colspan="3" align="center"><a href="./docs/source/tuning_strategies.md">Strategy</a></td>
+<td colspan="3" align="center"><a href="./docs/source/reference_examples.md">Reference Example</a></td>
</tr>
</tbody>
</table>
@@ -235,13 +235,13 @@ Intel® Neural Compressor validated 420+ [examples](./examples) for quantization
* Neural Coder, a new plug-in for Intel Neural Compressor was covered by [Twitter](https://twitter.com/IntelDevTools/status/1583629213697212416), [LinkedIn](https://www.linkedin.com/posts/intel-software_oneapi-ai-deeplearning-activity-6989377309917007872-Dbzg?utm_source=share&utm_medium=member_desktop), and [Intel Developer Zone](https://mp.weixin.qq.com/s/LL-4eD-R0YagFgODM23oQA) from Intel, and [Twitter](https://twitter.com/IntelDevTools/status/1583629213697212416/retweets) and [LinkedIn](https://www.linkedin.com/feed/update/urn:li:share:6990377841435574272/) from Hugging Face. (Oct 2022)
* Intel Neural Compressor successfully landed on [GCP](https://console.cloud.google.com/marketplace/product/bitnami-launchpad/inc-tensorflow-intel?project=verdant-sensor-286207), [AWS](https://aws.amazon.com/marketplace/pp/prodview-yjyh2xmggbmga#pdp-support), and [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bitnami.inc-tensorflow-intel) marketplace. (Oct 2022)

-> View our [full publication list](docs/publication_list.md).
+> View our [full publication list](./docs/source/publication_list.md).

## Additional Content

-* [Release Information](docs/releases_info.md)
-* [Contribution Guidelines](docs/contributions.md)
-* [Legal Information](docs/legal_information.md)
+* [Release Information](./docs/source/releases_info.md)
+* [Contribution Guidelines](./docs/source/contributions.md)
+* [Legal Information](./docs/source/legal_information.md)
* [Security Policy](SECURITY.md)
* [Intel® Neural Compressor Website](https://intel.github.io/neural-compressor)

18 changes: 0 additions & 18 deletions _static/custom.css

This file was deleted.

16 changes: 0 additions & 16 deletions api-documentation/api-reference.rst

This file was deleted.

10 changes: 0 additions & 10 deletions api-documentation/benchmark-api.rst

This file was deleted.

13 changes: 0 additions & 13 deletions api-documentation/objective-api.rst

This file was deleted.

10 changes: 0 additions & 10 deletions api-documentation/pruning-api.rst

This file was deleted.

7 changes: 0 additions & 7 deletions api-documentation/quantization-api.rst

This file was deleted.

44 changes: 44 additions & 0 deletions docs/Makefile
@@ -0,0 +1,44 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SOURCEDIR = source
BUILDDIR = _build
IMGDIR = source/_static/imgs
BUILDIMGDIR = _build/html/imgs
CODEIMGDIR = _build/html/_static

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile


html:
	# cp README.md to docs, modify response-link
	cp -f "../README.md" "./source/README.md"
	cp -f "./source/README.md" "./source/README.md.tmp"
	sed 's/.md/.html/g; s/.\/docs\/source\//.\//g; s/.\/neural_coder\/extensions\/screenshots/imgs/g; s/.\/docs\/source\/_static/..\/\/_static/g;' "./source/README.md.tmp" > "./source/README.md"
	rm -f "./source/README.md.tmp"

	# make sure other png can display normal
	$(SPHINXBUILD) -b html "$(SOURCEDIR)" "$(BUILDDIR)/html" $(SPHINXOPTS) $(O)

	cp source/_static/index.html $(BUILDDIR)/html/index.html
	mkdir -p "$(BUILDIMGDIR)"
	# common svg
	cp -f "$(CODEIMGDIR)/imgs/common/code.svg" "$(CODEIMGDIR)/images/view-page-source-icon.svg"
	cp -f "$(CODEIMGDIR)/imgs/common/right.svg" "$(CODEIMGDIR)/images/chevron-right-orange.svg"

	cp "../neural_coder/extensions/screenshots/extmanager.png" "$(BUILDIMGDIR)/extmanager.png"
	cp "$(IMGDIR)/INC_GUI.gif" "$(BUILDIMGDIR)/INC_GUI.gif"
	cp "$(IMGDIR)/release_data.png" "$(BUILDIMGDIR)/release_data.png"


# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
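The link-rewriting `sed` in the `html` target above can be sanity-checked in isolation; a minimal sketch using the first two substitutions (the sample Markdown line is made up for illustration):

```shell
# Feed one README-style link through the same rewrites the Makefile applies:
# ".md" -> ".html", then "./docs/source/" -> "./"
echo '[FAQ](./docs/source/faq.md)' \
  | sed 's/.md/.html/g; s/.\/docs\/source\//.\//g'
# prints: [FAQ](./faq.html)
```

Note that the patterns leave the dots unescaped (`.md` matches any character followed by `md`), which is tolerable here because the README links follow a predictable shape.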
15 changes: 0 additions & 15 deletions docs/design.md

This file was deleted.
