diff --git a/docker/README.md b/docker/README.md
index cac596f55..5d4efb758 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -17,6 +17,8 @@ Build the docker image

 	docker build -t openvino/openvino_tensorflow_ubuntu20_runtime:2.2.0 - < ubuntu20/openvino_tensorflow_cgvh_runtime_2.2.0.dockerfile

+### For Ubuntu
+
 Launch the Jupyter server with **CPU** access:

 	docker run -it --rm \
 	  -p 8888:8888 \
 	  openvino/openvino_tensorflow_ubuntu20_runtime:2.2.0
@@ -63,7 +65,32 @@ If execution fails on iGPU for 10th and 11th Generation Intel devices, provide d

 	docker build -t openvino/openvino_tensorflow_ubuntu20_runtime:2.2.0 --build-arg INTEL_OPENCL=20.35.17767 - < ubuntu20/openvino_tensorflow_cgvh_runtime_2.2.0.dockerfile

-# Dockerfiles for [TF-Serving](#https://github.com/tensorflow/serving) with OpenVINOTM integration with Tensorflow
+### For Windows
+
+Launch the Jupyter server with **CPU** access:
+
+```
+docker run -it --rm \
+  -p 8888:8888 \
+  openvino/openvino_tensorflow_ubuntu20_runtime:2.2.0
+```
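+
+Once the container is running, you can check that the **OpenVINO™ integration with TensorFlow** runtime inside the image imports cleanly. A minimal sketch (run inside the container, e.g. from a notebook cell; it assumes the image ships the `openvino_tensorflow` Python package):
+
+```
+import openvino_tensorflow as ovtf
+
+# Prints the backends available in this container, e.g. ['CPU'].
+print(ovtf.list_backends())
+```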
+
+Launch the Jupyter server with **iGPU** access:
+
+Prerequisites:
+
+* Windows* 10 21H2 or Windows* 11 with [WSL-2](https://docs.microsoft.com/en-us/windows/wsl/install)
+* [Intel iGPU driver](https://www.intel.com/content/www/us/en/download/19344/intel-graphics-windows-dch-drivers.html) >= 30.0.100.9684
+
+```
+docker run -it --rm \
+  -p 8888:8888 \
+  --device /dev/dxg:/dev/dxg \
+  --volume /usr/lib/wsl:/usr/lib/wsl \
+  openvino/openvino_tensorflow_ubuntu20_runtime:2.2.0
+```
+
+# Dockerfiles for [TF-Serving](https://github.com/tensorflow/serving) with OpenVINO™ integration with TensorFlow

 The TF Serving dockerfile requires the **OpenVINO™ integration with TensorFlow Runtime** image to be built. Refer to the section above for instructions on building it.

@@ -80,7 +107,7 @@ Build serving docker images:

 Here is an example that serves the Resnet50 model using OpenVINO™ integration with TensorFlow and a client script that performs inference on the model using the REST API (a sketch of such a client follows this section).

-1. Download [Resnet50 model](#https://storage.googleapis.com/tfhub-modules/google/imagenet/resnet_v2_50/classification/5.tar.gz) from TF Hub and untar its contents into the folder `resnet_v2_50_classifiation/5`
+1. Download the [Resnet50 model](https://storage.googleapis.com/tfhub-modules/google/imagenet/resnet_v2_50/classification/5.tar.gz) from TF Hub and untar its contents into the folder `resnet_v2_50_classifiation/5`

 2. Start the serving container for the Resnet50 model:

diff --git a/docker/README_cn.md b/docker/README_cn.md
index 0603306ec..8ca5daaa4 100644
--- a/docker/README_cn.md
+++ b/docker/README_cn.md
@@ -62,7 +62,7 @@ OVTF_BRANCH: the OpenVINO™ integration with TensorFlow branch to use.

 	docker build -t openvino/openvino_tensorflow_ubuntu20_runtime:2.2.0 --build-arg INTEL_OPENCL=20.35.17767 - < ubuntu20/openvino_tensorflow_cgvh_runtime_2.2.0.dockerfile

-# Dockerfiles for [TF-Serving](#https://github.com/tensorflow/serving) with OpenVINOTM integration with Tensorflow
+# Dockerfiles for [TF-Serving](https://github.com/tensorflow/serving) with OpenVINO™ integration with TensorFlow

 The TF Serving dockerfile requires the **OpenVINO™ integration with TensorFlow Runtime** image to be built. Refer to the section above for instructions on building it.

@@ -78,7 +78,7 @@ OVTF_VERSION: the **OpenVINO™ integration with TensorFlow Runtime** version to use

 Here is an example that serves the Resnet50 model using OpenVINO™ integration with TensorFlow, with a client script that performs inference on the model through the REST API.

-1. Download the [Resnet50 model](#https://storage.googleapis.com/tfhub-modules/google/imagenet/resnet_v2_50/classification/5.tar.gz) from TF Hub and untar its contents into the folder `resnet_v2_50_classifiation/5`.
+1. Download the [Resnet50 model](https://storage.googleapis.com/tfhub-modules/google/imagenet/resnet_v2_50/classification/5.tar.gz) from TF Hub and untar its contents into the folder `resnet_v2_50_classifiation/5`.

 2. Start the serving container for the Resnet50 model:
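+
+A sketch of such a client script (illustrative only: it assumes the serving container was started with the default TF Serving REST port 8501 published and the model name `resnet_v2_50_classifiation`; adjust both to match your `docker run` flags):
+
+```
+import json
+
+import numpy as np
+import requests
+
+# Replace the random batch with a real preprocessed image (1 x 224 x 224 x 3, float32).
+batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
+
+payload = json.dumps({"instances": batch.tolist()})
+response = requests.post(
+    "http://localhost:8501/v1/models/resnet_v2_50_classifiation:predict",
+    data=payload,
+)
+predictions = np.array(response.json()["predictions"])
+print(predictions.argmax(axis=-1))  # top-1 class index for each image in the batch
+```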
diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md
index 2af1759d7..94e70dbca 100644
--- a/docs/ARCHITECTURE.md
+++ b/docs/ARCHITECTURE.md
@@ -24,13 +24,18 @@ Operator Capability Manager (OCM) implements several checks on TensorFlow operat

 Graph partitioner examines the operators that are marked for clustering by OCM and performs further analysis on them. In this stage, the marked operators are first assigned to clusters. Some clusters are dropped after the analysis. For example, if the cluster size is very small or if the cluster is not supported by the backend after receiving more context, then the clusters are dropped and the operators fall back on native TensorFlow runtime. Each cluster of operators is then encapsulated into a custom operator that is executed in OpenVINO™ runtime.

+#### TensorFlow Frontend
+
+[TensorFlow Frontend](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends/tensorflow) converts the TensorFlow operations in the clusters to an OpenVINO™ Model
+using the latest available [Operation Set](https://docs.openvino.ai/latest/openvino_docs_ops_opset.html) of the OpenVINO™ toolkit. Once the model is created, it is compiled for the target OpenVINO™ plugin and used for inference.
+
 #### TensorFlow Importer

-TensorFlow importer translates the TensorFlow operators in the clusters to OpenVINO™ nGraph operators with the latest available [operator set](https://docs.OpenVINOtoolkit.org/latest/openvino_docs_ops_opset.html) for OpenVINO™ toolkit. An nGraph function is built for each of the clusters. Once created, it is wrapped into an OpenVINO™ CNNNetwork that holds the intermediate representation of the cluster to be executed in OpenVINO™ backend.
+Since the 2022.2 release, the TensorFlow Frontend performs the functions of the TensorFlow Importer and replaces it. In some exceptional cases, **OpenVINO™ integration with TensorFlow** can fall back to the TensorFlow Importer.

 #### Backend Manager

-Backend manager creates a backend for the execution of the CNNNetwork. We implemented two types of backends:
+Backend manager creates a backend for the execution of the OpenVINO™ Model. We implemented two types of backends:

 * Basic backend
 * VAD-M backend

@@ -39,4 +44,4 @@ Basic backend is used for Intel® CPUs, Intel® integrated

 VAD-M backend is used for Intel® Vision Accelerator Design with 8 Intel® Movidius™ MyriadX VPUs (referred to as VAD-M or HDDL). We use batch level parallelism for inference execution in the VAD-M backend. When the user provides a batched input, multiple inference requests are created, and inference is run in parallel on all the available VPUs in the VAD-M.

-Backend Manager supports Dynamic Fallback which means if the execution of the corresponding CNNNetwork fails in OpenVINO™ runtime, the execution falls back to native TensorFlow runtime.
+Backend Manager supports Dynamic Fallback, which means that if the execution of the corresponding model fails in OpenVINO™ runtime, the execution falls back to native TensorFlow runtime.
diff --git a/docs/ARCHITECTURE_cn.md b/docs/ARCHITECTURE_cn.md
index 81948bc2b..9100c6fa4 100644
--- a/docs/ARCHITECTURE_cn.md
+++ b/docs/ARCHITECTURE_cn.md
@@ -23,9 +23,13 @@ Operator Capability Manager (OCM) runs several checks on TensorFlow operators to

 Graph partitioner examines the nodes marked by OCM and analyzes them further. In this stage, the marked operators are first assigned to clusters. Some clusters are dropped after the analysis; for example, if a cluster is very small, or if the cluster is not supported by the backend after receiving more context, the cluster is dropped and its operators return to the native TensorFlow runtime. Each cluster of operators is then encapsulated into a custom operator that is executed in the OpenVINO™ runtime.

+#### TensorFlow Frontend
+
+[TensorFlow Frontend](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends/tensorflow) converts the TensorFlow operations in the clusters to an OpenVINO™ Model using the latest available [Operation Set](https://docs.openvino.ai/latest/openvino_docs_ops_opset.html) of the OpenVINO™ toolkit. Once the model is created, it is compiled for the target OpenVINO™ plugin and used for inference.
+
 #### TensorFlow Importer

-The TensorFlow importer translates the TensorFlow operators in the clusters into OpenVINO™ nGraph operators using the latest [operator set](https://docs.OpenVINOtoolkit.org/latest/openvino_docs_ops_opset.html) for the OpenVINO™ toolkit. An nGraph function is built for each cluster. Once created, it is wrapped into an OpenVINO™ CNNNetwork, which holds the intermediate representation of the cluster to be executed on the OpenVINO™ backend.
+Since the 2022.2 release, the TensorFlow Frontend performs the functions of the TensorFlow Importer and replaces it. In some exceptional cases, **OpenVINO™ integration with TensorFlow** can fall back to the TensorFlow Importer.

 #### Backend Manager
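+
+For illustration, the backend described in this section can be selected at runtime through the Python API. A minimal sketch (assuming the `openvino_tensorflow` package is installed and the chosen backend device is present on the machine):
+
+```
+import openvino_tensorflow as ovtf
+
+print(ovtf.list_backends())  # backends the Backend Manager can create, e.g. ['CPU', 'GPU', 'MYRIAD']
+ovtf.set_backend('CPU')      # route clusters to the basic backend on the CPU
+
+# Any TensorFlow inference executed from this point on uses the selected
+# OpenVINO(TM) backend; failing clusters fall back to native TensorFlow
+# when dynamic fallback is enabled.
+```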
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index eb1e89ba4..eebd7fa85 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -23,7 +23,7 @@ * To use it:
     1. Install the tensorflow and openvino-tensorflow packages from PyPI as explained in the section above
     2. Download & install the Intel® Distribution of OpenVINO™ Toolkit 2022.2.0 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
-    3. Initialize the OpenVINO™ environment by running the `setupvars.sh` located in \\/bin using the command below. This step needs to be executed in the same environment which is used for the TensorFlow model inference using openvino-tensorflow.
+    3. Initialize the OpenVINO™ environment by running the `setupvars.sh` located in \ using the command below. This step needs to be executed in the same environment that is used for the TensorFlow model inference using openvino-tensorflow.

         source setupvars.sh

diff --git a/docs/INSTALL_cn.md b/docs/INSTALL_cn.md
index 6f6748e77..7964a8883 100644
--- a/docs/INSTALL_cn.md
+++ b/docs/INSTALL_cn.md
@@ -22,7 +22,7 @@ * To use it:
     1. Install the tensorflow and openvino-tensorflow packages from PyPI as described above.
     2. Download and install the Intel® OpenVINO™ 2022.2.0 release along with its dependencies ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
-    3. Initialize Intel® OpenVINO™ by running the `setupvars.sh` script located in \\/bin. This step needs to be executed in the same environment that is used for TensorFlow model inference with openvino-tensorflow.
+    3. Initialize Intel® OpenVINO™ by running the `setupvars.sh` script located in \. This step needs to be executed in the same environment that is used for TensorFlow model inference with openvino-tensorflow.

         source setupvars.sh

diff --git a/docs/USAGE.md b/docs/USAGE.md
index da44e6b11..29073bf64 100644
--- a/docs/USAGE.md
+++ b/docs/USAGE.md
@@ -82,111 +82,145 @@ or

 ## Environment Variables

-**OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS**
-
+- **OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS:**
 This variable is disabled by default. When enabled, it freezes variables from TensorFlow's ReadVariableOp as constants during the graph translation phase. Enabling it is highly recommended to ensure optimal inference latencies on eagerly executed models. Disable it when model weights are modified after loading the model for inference.

-Example:
-
-    OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS="1"
+    Example:
+
+        OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS="1"

-**OPENVINO_TF_BACKEND:**
+- **OPENVINO_TF_BACKEND:**
 The backend device name can be set using this variable. It should be set to "CPU", "GPU", "GPU_FP16", "MYRIAD", or "VAD-M".

-Example:
-
-    OPENVINO_TF_BACKEND="MYRIAD"
+    Example:
+
+        OPENVINO_TF_BACKEND="MYRIAD"

-**OPENVINO_TF_DISABLE:**
+- **OPENVINO_TF_DISABLE:**
 Disables **OpenVINO™ integration with TensorFlow** if set to 1.

-Example:
-
-    OPENVINO_TF_DISABLE="1"
+    Example:
+
+        OPENVINO_TF_DISABLE="1"

-**OPENVINO_TF_LOG_PLACEMENT:**
+- **OPENVINO_TF_LOG_PLACEMENT:**
 If this variable is set to 1, it will print the logs related to cluster formation and encapsulation.

-Example:
-
-    OPENVINO_TF_LOG_PLACEMENT="1"
+    Example:
+
+        OPENVINO_TF_LOG_PLACEMENT="1"
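+
+Because these are ordinary process environment variables, they can also be set from Python before inference is executed, instead of in the shell. A minimal sketch (values are illustrative):
+
+    import os
+
+    # Set before running any inference through openvino-tensorflow.
+    os.environ["OPENVINO_TF_BACKEND"] = "CPU"
+    os.environ["OPENVINO_TF_LOG_PLACEMENT"] = "1"
+
+    import openvino_tensorflow as ovtf
+    # Cluster formation and encapsulation logs are printed when a model runs.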
-**OPENVINO_TF_MIN_NONTRIVIAL_NODES:**
+- **OPENVINO_TF_MIN_NONTRIVIAL_NODES:**
 This variable sets the minimum number of operators that can exist in a cluster. If the number of operators in a cluster is smaller than the specified number, the cluster will be de-assigned and all the Ops in it are executed using native TensorFlow. By default, it is calculated based on the total graph size, but it cannot be less than 6 unless it is set manually. (No performance benefit is observed by enabling very small clusters.) To get a detailed cluster summary, set "OPENVINO_TF_LOG_PLACEMENT" to 1.

-Example:
-
-    OPENVINO_TF_MIN_NONTRIVIAL_NODES="10"
+    Example:
+
+        OPENVINO_TF_MIN_NONTRIVIAL_NODES="10"

-**OPENVINO_TF_MAX_CLUSTERS:**
+- **OPENVINO_TF_MAX_CLUSTERS:**
 This variable sets the maximum number of clusters selected for execution using the OpenVINO™ backend. The clusters are selected based on their size (from largest to smallest), and this decision is made at the final stage of cluster de-assignment. Ops of the remaining clusters are unmarked and are executed using native TensorFlow. Setting this environment variable is useful if there are a few large clusters and a number of small clusters; performance may improve when only the large clusters are scheduled on the OpenVINO™ backend. To get a detailed cluster summary, set "OPENVINO_TF_LOG_PLACEMENT" to 1.

-Example:
-
-    OPENVINO_TF_MAX_CLUSTERS="3"
+    Example:
+
+        OPENVINO_TF_MAX_CLUSTERS="3"

-**OPENVINO_TF_VLOG_LEVEL:**
+- **OPENVINO_TF_VLOG_LEVEL:**
 This variable is used to print the execution logs. Setting it to 1 will print the minimum amount of detail and setting it to 5 will print the most detailed logs.

-Example:
-
-    OPENVINO_TF_VLOG_LEVEL="4"
+    Example:
+
+        OPENVINO_TF_VLOG_LEVEL="4"

-**OPENVINO_TF_DISABLED_OPS:**
+- **OPENVINO_TF_DISABLED_OPS:**
 A list of disabled operators can be passed using this variable. These operators will not be considered for clustering and they will fall back to native TensorFlow.

-Example:
-
-    OPENVINO_TF_DISABLED_OPS="Squeeze,Greater,Gather,Unpack"
+    Example:
+
+        OPENVINO_TF_DISABLED_OPS="Squeeze,Greater,Gather,Unpack"

-**OPENVINO_TF_DUMP_GRAPHS:**
+- **OPENVINO_TF_DUMP_GRAPHS:**
 Setting this variable will serialize the full graphs in all stages during the optimization pass and save them in the current directory.

-Example:
-
-    OPENVINO_TF_DUMP_GRAPHS="1"
+    Example:
+
+        OPENVINO_TF_DUMP_GRAPHS="1"

-**OPENVINO_TF_DUMP_CLUSTERS:**
+- **OPENVINO_TF_DUMP_CLUSTERS:**
 Setting this variable to 1 will serialize all the clusters in ".pbtxt" format and save them in the current directory.

-Example:
-
-    OPENVINO_TF_DUMP_CLUSTERS="1"
+    Example:
+
+        OPENVINO_TF_DUMP_CLUSTERS="1"

-**OPENVINO_TF_ENABLE_BATCHING:**
+- **OPENVINO_TF_ENABLE_BATCHING:**
 If this parameter is set to 1 while using VAD-M as the backend, the backend engine will divide the input into multiple asynchronous requests to utilize all devices in VAD-M and achieve better performance.

-Example:
-
-    OPENVINO_TF_ENABLE_BATCHING="1"
+    Example:
+
+        OPENVINO_TF_ENABLE_BATCHING="1"

-**OPENVINO_TF_DYNAMIC_FALLBACK**
+- **OPENVINO_TF_DYNAMIC_FALLBACK:**
 This variable enables or disables the dynamic fallback feature. It should be set to "0" to disable and "1" to enable dynamic fallback. When enabled, clusters causing errors during runtime can fall back to native TensorFlow even though they are assigned to run on OpenVINO™. Enabled by default.

-Example:
-
-    OPENVINO_TF_DYNAMIC_FALLBACK="0"
+    Example:
+
+        OPENVINO_TF_DYNAMIC_FALLBACK="0"

-**OPENVINO_TF_CONSTANT_FOLDING:**
+- **OPENVINO_TF_CONSTANT_FOLDING:**
 This will enable/disable the constant folding pass on the translated clusters (disabled by default).

-Example:
-
-    OPENVINO_TF_CONSTANT_FOLDING="1"
+    Example:
+
+        OPENVINO_TF_CONSTANT_FOLDING="1"
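+
+Several of these switches are typically combined when debugging cluster assignment. A minimal sketch (values are illustrative) that excludes a few operators from clustering and dumps the final clusters for inspection:
+
+    import os
+
+    # These ops stay on native TensorFlow and are never clustered.
+    os.environ["OPENVINO_TF_DISABLED_OPS"] = "Squeeze,Greater,Gather,Unpack"
+    # Serialize the resulting clusters as .pbtxt files in the current directory.
+    os.environ["OPENVINO_TF_DUMP_CLUSTERS"] = "1"
+
+    import openvino_tensorflow as ovtf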
-**OPENVINO_TF_TRANSPOSE_SINKING:**
+- **OPENVINO_TF_TRANSPOSE_SINKING:**
 This will enable/disable the transpose sinking pass on the translated clusters (enabled by default).

-Example:
-
-    OPENVINO_TF_TRANSPOSE_SINKING="0"
+    Example:
+
+        OPENVINO_TF_TRANSPOSE_SINKING="0"

-**OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS:**
+- **OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS:**
 After clusters are formed, some of the clusters may still fall back to native TensorFlow (e.g., a cluster is too small, or some conditions are not supported by the target device). If this variable is set, clusters will not be dropped and will be forced to run on the OpenVINO™ backend. This may reduce the performance gain or may lead the execution to crash in some cases.

-Example:
+    Example:
+
+        OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS="1"
+
+- **OPENVINO_TF_DISABLE_TFFE:**
+Starting from the **OpenVINO™ integration with TensorFlow 2.2.0** release, TensorFlow operations are converted by the [TensorFlow Frontend](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends/tensorflow) to the latest available [Operation Set](https://docs.openvino.ai/latest/openvino_docs_ops_opset.html) of the OpenVINO™ toolkit, except in some exceptional cases. By setting **OPENVINO_TF_DISABLE_TFFE** to **1**, the TensorFlow Frontend can be disabled. In that case, the TensorFlow Importer (the default translator of **OpenVINO™ integration with TensorFlow 2.1.0** and earlier) will be used to translate TensorFlow operations for all backends. If this environment variable is set to **0**, the TensorFlow Frontend will be enabled for all backends. As of the **OpenVINO™ integration with TensorFlow 2.2.0** release, this environment variable is effective only on the Ubuntu and Windows platforms; the TensorFlow Frontend is not yet supported on macOS. The table below shows the translation modules used by default for each backend and platform in **OpenVINO™ integration with TensorFlow 2.2.0**.
+
+  | | **CPU** | **GPU** | **GPU_FP16** | **MYRIAD** | **VAD-M** | **Notes** |
+  |-------------|-------------|-------------|--------------|-------------|-------------|--------------------------------------------------------|
+  | **Ubuntu**  | TF Frontend | TF Frontend | TF Frontend  | TF Importer | TF Importer | _Environment variable changes the default translator_ |
+  | **Windows** | TF Frontend | TF Frontend | TF Frontend  | TF Importer | TF Importer | _Environment variable changes the default translator_ |
+  | **macOS**   | TF Importer | TF Importer | TF Importer  | TF Importer | TF Importer | _Environment variable is not effective_ |
+
+    Example:
+
+        OPENVINO_TF_DISABLE_TFFE="1"
+
+- **OPENVINO_TF_MODEL_CACHE_DIR:**
+Using this environment variable, [OpenVINO™ model caching](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_caching_overview.html) can be enabled with **OpenVINO™ integration with TensorFlow**. A cache directory must be specified; it is used to store the cached files. Reusing a cached model can reduce the model compilation time, which improves the first inference latency of **OpenVINO™ integration with TensorFlow**. Model caching is disabled by default; to enable it, specify the cache directory using this environment variable. **Note: Model caching support is experimental for the OpenVINO™ integration with TensorFlow 2.2.0 release and it is not fully validated.**
+
+    Example:
+
+        OPENVINO_TF_MODEL_CACHE_DIR=path/to/model/cache/directory

-    OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS="1"
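+
+To observe the effect of caching, time the first inference of the same model across two runs that share a cache directory. A minimal sketch (model and path are illustrative):
+
+    import os
+    import time
+
+    # Reuse the same directory across runs so cached blobs can be loaded.
+    os.environ["OPENVINO_TF_MODEL_CACHE_DIR"] = "/tmp/ovtf_cache"
+
+    import tensorflow as tf
+    import openvino_tensorflow as ovtf
+
+    model = tf.keras.applications.MobileNetV2(weights=None)
+    x = tf.random.uniform((1, 224, 224, 3))
+
+    start = time.time()
+    model(x)  # the first inference triggers compilation; later runs reuse the cache
+    print("first inference took %.3f s" % (time.time() - start))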
+- **OPENVINO_TF_ENABLE_OVTF_PROFILING:**
+When this environment variable is set to **1**, additional performance timing information will be printed as part of the verbose logs. It should be used together with the **OPENVINO_TF_VLOG_LEVEL** environment variable and is only effective when the verbose log level is set to **1** or greater.
+
+    Example:
+
+        OPENVINO_TF_VLOG_LEVEL=1
+        OPENVINO_TF_ENABLE_OVTF_PROFILING=1
+
+- **OPENVINO_TF_ENABLE_PERF_COUNT:**
+This environment variable is used to print operator-level performance counter information. It is supported only by the CPU backend.
+
+    Example:
+
+        OPENVINO_TF_ENABLE_PERF_COUNT=1

 ## GPU Precision
diff --git a/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb b/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb
index 1ac3170ae..435e74de4 100644
--- a/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb
+++ b/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb
@@ -298,6 +298,20 @@
    "infer_openvino_tensorflow(model_file, file_name, input_height, input_width, input_mean, input_std, label_file )\n",
    "ovtf.enable()"
   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "9e7e3e6f",
+   "metadata": {},
+   "source": [
+    "## Notices & Disclaimers\n",
+    "\n",
+    "Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. \n",
+    "\n",
+    "Your costs and results may vary. \n",
+    "\n",
+    "Intel technologies may require enabled hardware, software or service activation.\n"
+   ]
  }
 ],
 "metadata": {
diff --git a/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb b/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb
index 11cc87751..ba1a27590 100644
--- a/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb
+++ b/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb
@@ -443,6 +443,20 @@
    "infer_openvino_tensorflow(model_file, input_file, input_height, input_width, label_file, anchor_file, conf_threshold, iou_threshold )\n",
    "ovtf.enable()"
   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "9e7e3e6f",
+   "metadata": {},
+   "source": [
+    "## Notices & Disclaimers\n",
+    "\n",
+    "Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. \n",
+    "\n",
+    "Your costs and results may vary. \n",
+    "\n",
+    "Intel technologies may require enabled hardware, software or service activation.\n"
+   ]
  }
 ],
 "metadata": {
diff --git a/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb b/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb
index b282d3ab6..75764a03f 100644
--- a/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb
+++ b/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb
@@ -849,6 +849,20 @@
    "\n",
    "plt.show()"
   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "9e7e3e6f",
+   "metadata": {},
+   "source": [
+    "## Notices & Disclaimers\n",
+    "\n",
+    "Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. \n",
+    "\n",
+    "Your costs and results may vary. \n",
+    "\n",
+    "Intel technologies may require enabled hardware, software or service activation.\n"
+   ]
  }
 ],
 "metadata": {
diff --git a/examples/notebooks/README.md b/examples/notebooks/README.md
index ffb4cc45c..8dd15978c 100644
--- a/examples/notebooks/README.md
+++ b/examples/notebooks/README.md
@@ -1,18 +1,18 @@

# 📚 OpenVINO™ integration with TensorFlow Notebooks

-Welcome to our collection of ready-to-run Jupyter notebooks that helps you to quickly try out **OpenVINO™ integration with TensorFlow**. The notebooks carry out popular deep learning tasks like Image Classification, and Object Detection in TensorFlow and demonstrate it to developers how to leverage our simple two-liner API for optimized deep learning inference, without ever leaving the TensorFlow and Python ecosystem.
+Welcome to our collection of ready-to-run Jupyter* notebooks that help you quickly try out **OpenVINO™ integration with TensorFlow**. The notebooks carry out popular deep learning tasks like image classification and object detection in TensorFlow and demonstrate to developers how to leverage our simple two-line API for optimized deep learning inference, without ever leaving the TensorFlow and Python* ecosystem.

-The notebooks can be run on an **Intel CPU running a supported version of the Ubuntu OS (currently 18.04 or 20.04)**. We recommend a Python virtual environment to start the Jupyter server.
+The notebooks can be run on an **Intel CPU running a supported version of the Ubuntu* OS (currently 18.04 or 20.04)**. We recommend using a Python* virtual environment to start the Jupyter* server.

-### 1. Install Python, Git, and GPU drivers (optional)
+### 1. Install Python*, Git, and GPU drivers (optional)

-You may need to install some additional libraries on Ubuntu Linux. These steps work on a clean install of Ubuntu Desktop 20.04, and should also work on Ubuntu 18.04 and 20.10, and on Ubuntu Server.
+You may need to install some additional libraries on Ubuntu* Linux. These steps work on a clean install of Ubuntu* Desktop 20.04, and should also work on Ubuntu* 18.04 and 20.10, and on Ubuntu* Server.

 	sudo apt-get update
 	sudo apt-get upgrade
 	sudo apt-get install python3-venv build-essential python3-dev git-all

-If you have a CPU with an Intel Integrated Graphics Card, you can install the [Intel Graphics Compute Runtime](https://github.com/intel/compute-runtime) to enable inference on this device. The command for Ubuntu 20.04 is:
+If you have a CPU with an Intel Integrated Graphics Card, you can install the [Intel Graphics Compute Runtime](https://github.com/intel/compute-runtime) to enable inference on this device. The command for Ubuntu* 20.04 is:

 Note: Only execute this command if you do not yet have OpenCL drivers installed.

@@ -25,7 +25,7 @@ First, let's clone this repo to get access to the notebooks

 	git clone https://github.com/openvinotoolkit/openvino_tensorflow
 	cd openvino_tensorflow

-Now, create a Python virtual environment and activate it
+Now, create a Python* virtual environment and activate it

 	python3 -m venv openvino_tensorflow_env
 	source openvino_tensorflow_env/bin/activate

@@ -42,4 +42,7 @@ To launch a single notebook, like the TFHub Object Detection notebook

 ## (Optional) Run these notebooks on Docker

-Alternatively, if you want to skip a local setup and want a stable runtime consider our [docker instructions for runtime images](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docker). The images tagged `latest` start a Jupyter server by default.
\ No newline at end of file
+Alternatively, if you want to skip a local setup and want a stable runtime, consider our [docker instructions for runtime images](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docker). The images tagged `latest` start a Jupyter* server by default.
+
+---
+\* Other names and brands may be claimed as the property of others.
diff --git a/images/openvino_tensorflow_architecture.png b/images/openvino_tensorflow_architecture.png
index 3baf5278b..7b0bc3eef 100644
Binary files a/images/openvino_tensorflow_architecture.png and b/images/openvino_tensorflow_architecture.png differ