4 changes: 2 additions & 2 deletions .ci_local_test/README.md
@@ -1,8 +1,8 @@


The Jenkinsfile Introduce:
Jenkins file Introduction:

1. The jenkins matchine would scan the ROS2_Openvion project regularly.
1. The Jenkins machine would scan the ROS2_OpenVINO project regularly.

It would trigger a test when it scans a PR or other change.

38 changes: 14 additions & 24 deletions .ci_local_test/ros2_openvino_toolkit_test/docker_run.sh
@@ -1,43 +1,33 @@
#!/bin/bash

export DISPLAY=:0

export work_dir=$PWD


function run_container() {

docker images | grep ros2_openvino_docker

if [ $? -eq 0 ]
then
echo "the image of ros2_openvino_docker:01 existence"
function run_container()
{
if [ -n "$(docker images -q ros2_openvino_docker:01)" ]; then
echo "The image ros2_openvino_docker:01 exists. Removing the old image..."
docker rmi -f ros2_openvino_docker:01
fi

docker ps -a | grep ros2_openvino_container
if [ $? -eq 0 ]
then
if [ -n "$(docker ps -aq -f name=ros2_openvino_container)" ]; then
echo "The container ros2_openvino_container exists. Removing the container..."
docker rm -f ros2_openvino_container
fi

# Remove the git clone step from the Dockerfile:
# use the ros2_openvino_toolkit code from the Jenkins workspace instead of the cloned copy.
cd $work_dir && sed -i '/RUN git clone -b ros2/d' Dockerfile
cd "$work_dir" && sed -i '/RUN git clone -b ros2/d' Dockerfile
# add the jpg for test.
cd $work_dir && sed -i '$i COPY jpg /root/jpg' Dockerfile

cd $work_dir && docker build --build-arg ROS_PRE_INSTALLED_PKG=galactic-desktop --build-arg VERSION=galactic -t ros2_openvino_docker:01 .
cd $work_dir && docker images
docker run -i --privileged=true --device=/dev/dri -v $work_dir/ros2_openvino_toolkit:/root/catkin_ws/src/ros2_openvino_toolkit -v $HOME/.Xauthority:/root/.Xauthority -e GDK_SCALE -v $work_dir/test_cases:/root/test_cases --name ros2_openvino_container ros2_openvino_docker:01 bash -c "cd /root/test_cases && ./run.sh galactic"
cd "$work_dir" && sed -i '$i COPY jpg /root/jpg' Dockerfile

cd "$work_dir" && docker build --build-arg ROS_PRE_INSTALLED_PKG=galactic-desktop --build-arg VERSION=galactic -t ros2_openvino_docker:01 .
cd "$work_dir" && docker images
docker run -i --privileged=true --device=/dev/dri -v "$work_dir"/ros2_openvino_toolkit:/root/catkin_ws/src/ros2_openvino_toolkit -v "$HOME"/.Xauthority:/root/.Xauthority -e GDK_SCALE -v "$work_dir"/test_cases:/root/test_cases --name ros2_openvino_container ros2_openvino_docker:01 bash -c "cd /root/test_cases && ./run.sh galactic"
}

run_container
if [ $? -ne 0 ]
then
echo "Test fail"
exit -1
if ! run_container; then
echo "Test failed"
exit 1
fi
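
For local debugging outside Jenkins, the same flow can be reproduced by hand. A minimal sketch, assuming the repository layout shown above and an X display on the host; apart from the paths and the `galactic` argument already used by the script, everything here is an assumption, not part of the committed CI.

```
# Reproduce the CI flow locally (sketch, not part of the committed script).
export DISPLAY=:0                                 # X display forwarded into the container
cd .ci_local_test/ros2_openvino_toolkit_test || exit 1
./docker_run.sh                                   # builds ros2_openvino_docker:01 and runs test_cases/run.sh galactic
# The stopped container can still be inspected after the run:
docker logs ros2_openvino_container | tail -n 50
```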


@@ -3,17 +3,16 @@
mkdir -p /opt/openvino_toolkit/models
#apt install -y python-pip
apt install -y python3.8-venv
cd ~ && python3 -m venv openvino_env && source openvino_env/bin/activate
cd ~ && python3 -m venv openvino_env
#shellcheck source=/dev/null
source openvino_env/bin/activate
python -m pip install --upgrade pip
pip install openvino-dev[tensorflow2,onnx]==2022.3

pip install "openvino-dev[tensorflow2,onnx]==2022.3"

#Download the optimized Intermediate Representation (IR) of model (execute once)
cd ~/catkin_ws/src/ros2_openvino_toolkit/data/model_list && omz_downloader --list download_model.lst -o /opt/openvino_toolkit/models/

cd ~/catkin_ws/src/ros2_openvino_toolkit/data/model_list && omz_converter --list convert_model.lst -d /opt/openvino_toolkit/models/ -o /opt/openvino_toolkit/models/convert


#Copy label files (execute once)
cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/face_detection/face-detection-adas-0001.labels /opt/openvino_toolkit/models/intel/face-detection-adas-0001/FP32/
cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/face_detection/face-detection-adas-0001.labels /opt/openvino_toolkit/models/intel/face-detection-adas-0001/FP16/
@@ -27,4 +26,3 @@ cp /opt/openvino_toolkit/models/convert/public/mask_rcnn_inception_resnet_v2_atr

cd /root/test_cases/ && ./yolov5_model_download.sh
cd /root/test_cases/ && ./yolov8_model_download.sh
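
As a quick sanity check after the download and convert steps above, the produced model tree can be listed before any pipeline is launched. This is a sketch that only uses paths appearing in this script; it is not part of the CI itself.

```
# Verify the model tree produced above (sketch).
ls /opt/openvino_toolkit/models/intel/face-detection-adas-0001/FP32/
ls /opt/openvino_toolkit/models/intel/face-detection-adas-0001/FP16/
ls /opt/openvino_toolkit/models/convert/public/
# Each model directory should hold the IR pair (.xml/.bin) plus any copied .labels file.
```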

9 changes: 5 additions & 4 deletions .ci_local_test/ros2_openvino_toolkit_test/test_cases/run.sh
@@ -7,10 +7,12 @@ then
else
export ros2_branch=$1
fi
source /root/test_cases/config.sh $ros2_branch
#shellcheck source=/dev/null
source /root/test_cases/config.sh "$ros2_branch"

cd /root/catkin_ws && colcon build --symlink-install
cd /root/catkin_ws && source ./install/local_setup.bash
# shellcheck source=/dev/null
source ./install/local_setup.bash

apt-get update
# apt-get install -y ros-$ros2_branch-diagnostic-updater
@@ -31,6 +33,5 @@ result=$?
echo "Test ENV:" && df -h && free -g
if [ $result -ne 0 ]
then
exit -1
exit 1
fi

@@ -5,34 +5,32 @@ cd /root && git clone https://github.com/ultralytics/yolov5.git

#Set Environment for Installing YOLOv5

cd yolov5
cd yolov5 || exit
python3 -m venv yolo_env # Create a virtual python environment
# shellcheck source=/dev/null
source yolo_env/bin/activate # Activate environment
pip install -r requirements.txt # Install yolov5 prerequisites
pip install wheel
pip install onnx

# Download PyTorch Weights
mkdir -p /root/yolov5/model_convert && cd /root/yolov5/model_convert
mkdir -p /root/yolov5/model_convert && cd /root/yolov5/model_convert || exit
wget https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n.pt

cd /root/yolov5
cd /root/yolov5 || exit
python3 export.py --weights model_convert/yolov5n.pt --include onnx


#2. Convert ONNX files to IR files
cd /root/yolov5/
cd /root/yolov5/ || exit
python3 -m venv ov_env # Create openVINO virtual environment
# shellcheck source=/dev/null
source ov_env/bin/activate # Activate environment
python -m pip install --upgrade pip # Upgrade pip
pip install openvino[onnx]==2022.3.0 # Install OpenVINO for ONNX
pip install openvino-dev[onnx]==2022.3.0 # Install OpenVINO Dev Tool for ONNX

pip install "openvino[onnx]==2022.3.0" # Install OpenVINO for ONNX
pip install "openvino-dev[onnx]==2022.3.0" # Install OpenVINO Dev Tool for ONNX

cd /root/yolov5/model_convert
cd /root/yolov5/model_convert || exit
mo --input_model yolov5n.onnx


mkdir -p /opt/openvino_toolkit/models/convert/public/yolov5n/FP32/
sudo cp yolov5n.bin yolov5n.mapping yolov5n.xml /opt/openvino_toolkit/models/convert/public/yolov5n/FP32/
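
To confirm the converted IR actually loads before it is wired into a pipeline, the `benchmark_app` tool installed with `openvino-dev` can be pointed at the generated files. A minimal sketch, assuming the `ov_env` environment created above is still active; it is an optional check, not part of the committed script.

```
# Optional sanity check (sketch): load the converted yolov5n IR with benchmark_app.
benchmark_app \
  -m /opt/openvino_toolkit/models/convert/public/yolov5n/FP32/yolov5n.xml \
  -d CPU \
  -t 5   # run inference for ~5 seconds just to verify the model loads
```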

@@ -2,23 +2,21 @@

#Pip install the ultralytics package including all requirements in a Python>=3.7 environment with PyTorch>=1.7.

mkdir -p yolov8 && cd yolov8
mkdir -p yolov8 && cd yolov8 || exit
pip install ultralytics
apt install python3.8-venv
python3 -m venv openvino_env
# shellcheck source=/dev/null
source openvino_env/bin/activate


#Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
# export official model
yolo export model=yolov8n.pt format=openvino
yolo export model=yolov8n-seg.pt format=openvino


# Move to the Recommended Model Path
mkdir -p /opt/openvino_toolkit/models/convert/public/FP32/yolov8n
mkdir -p /opt/openvino_toolkit/models/convert/public/FP32/yolov8n-seg

cp yolov8n_openvino_model/* /opt/openvino_toolkit/models/convert/public/FP32/yolov8n
cp yolov8n-seg_openvino_model/* /opt/openvino_toolkit/models/convert/public/FP32/yolov8n-seg

20 changes: 10 additions & 10 deletions README.md
@@ -24,7 +24,7 @@
# Overview
## ROS2 Version Supported

|Branch Name|ROS2 Version Supported|Openvino Version|OS Version|
|Branch Name|ROS2 Version Supported|OpenVINO Version|OS Version|
|-----------------------|-----------------------|--------------------------------|----------------------|
|[ros2](https://github.com/intel/ros2_openvino_toolkit/tree/ros2)|Galactic, Foxy, Humble|V2022.1, V2022.2, V2022.3|Ubuntu 20.04, Ubuntu 22.04|
|[dashing](https://github.com/intel/ros2_openvino_toolkit/tree/dashing)|Dashing|V2022.1, V2022.2, V2022.3|Ubuntu 18.04|
@@ -50,12 +50,12 @@
|**OS**|Mandatory|We only tested this project under Ubuntu distros. It is recommended to install the corresponding Ubuntu Distro according to the ROS distro that you select to use. **For example: Ubuntu 18.04 for dashing, Ubuntu 20.04 for Foxy and Galactic, Ubuntu 22.04 for Humble.**|
|**ROS2**|Mandatory|We have already supported active ROS distros (Humble, Galactic, Foxy and Dashing (deprecated)). Choose the one matching your needs. You may find the corresponding branch from the table above in section [**ROS2 Version Supported**](#ros2-version-supported).|
|**OpenVINO**|Mandatory|The version of OpenVINO toolkit is decided by the OS and ROS2 distros you use. See the table above in Section [**ROS2 Version Supported**](#ros2-version-supported).|
|**Realsense Camera**|Optional|Realsense Camera is optional, you may choose these alternatives as the input: Standard Camera, ROS Image Topic, Video/Image File or RTSP camera.|
|**RealSense Camera**|Optional|RealSense Camera is optional, you may choose these alternatives as the input: Standard Camera, ROS Image Topic, Video/Image File or RTSP camera.|

# Introduction
## Design Architecture
<p><details><summary>Architecture Design</summary>
From the view of hirarchical architecture design, the package is divided into different functional components, as shown in below picture.
From the view of hierarchical architecture design, the package is divided into different functional components, as shown in the picture below.

![OpenVINO_Architecture](./data/images/design_arch.PNG "OpenVINO RunTime Architecture")

@@ -83,16 +83,16 @@ See more from [here](https://github.com/openvinotoolkit/openvino) for Intel Open
<details>
<summary>ROS Input & Output</summary>

- **Diversal Input resources** are data resources to be infered and analyzed with the OpenVINO framework.
- **ROS interfaces and outputs** currently include _Topic_ and _service_. Natively, RViz output and CV image window output are also supported by refactoring topic message and inferrence results.
- **Diverse Input resources** are data resources to be inferred and analyzed with the OpenVINO framework.
- **ROS interfaces and outputs** currently include _Topic_ and _service_. Natively, RViz output and CV image window output are also supported by refactoring topic message and inference results.
</details>
</p>
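
As a concrete illustration of the Topic output path, a published inference topic can be inspected from another shell while a pipeline is running. This is a sketch; the topic name below is a placeholder, since actual names depend on the outputs configured in the pipeline's .yaml file.

```
# List topics published by a running pipeline and echo one of them.
ros2 topic list
# Placeholder topic name; the real one is defined by the pipeline configuration.
ros2 topic echo /ros2_openvino_toolkit/face_detection
```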

<p>
<details>
<summary>Optimized Models</summary>

- **Optimized Models** provided by Model Optimizer component of Intel® OpenVINO™ toolkit. Imports trained models from various frameworks (Caffe*, Tensorflow*, MxNet*, ONNX*, Kaldi*) and converts them to a unified intermediate representation file. It also optimizes topologies through node merging, horizontal fusion, eliminating batch normalization, and quantization. It also supports graph freeze and graph summarize along with dynamic input freezing.
- **Optimized Models** provided by Model Optimizer component of Intel® OpenVINO™ toolkit. Imports trained models from various frameworks (Caffe*, TensorFlow*, MxNet*, ONNX*, Kaldi*) and converts them to a unified intermediate representation file. It also optimizes topologies through node merging, horizontal fusion, eliminating batch normalization, and quantization. It also supports graph freeze and graph summarize along with dynamic input freezing.
</details>
</p>
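
As an example of that workflow, a public model can be fetched and converted either through the OMZ wrappers or by calling the Model Optimizer directly. The sketch below mirrors commands used elsewhere in this repository, with `ssd_mobilenet_v2_coco` used only as an example model name.

```
# Download a public model and convert it to IR with the OMZ tools (example model name).
omz_downloader --name ssd_mobilenet_v2_coco -o /opt/openvino_toolkit/models/
omz_converter --name ssd_mobilenet_v2_coco -d /opt/openvino_toolkit/models/ -o /opt/openvino_toolkit/models/convert
# Or convert a single exported model directly with the Model Optimizer:
mo --input_model yolov5n.onnx
```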
</details></p>
@@ -103,15 +103,15 @@ From the view of logic implementation, the package introduces the definitions of

![Logic_Flow](./data/images/impletation_logic.PNG "OpenVINO RunTime Logic Flow")

Once a corresponding program is launched with a specified .yaml config file passed in the .launch file or via commandline, _**parameter manager**_ analyzes the configurations about pipeline and the whole framework, then shares the parsed configuration information with pipeline procedure. A _**pipeline instance**_ is created by following the configuration info and is added into _**pipeline manager**_ for lifecycle control and inference action triggering.
Once a corresponding program is launched with a specified .yaml config file passed in the .launch file or via command line, _**parameter manager**_ analyzes the configurations about pipeline and the whole framework, then shares the parsed configuration information with pipeline procedure. A _**pipeline instance**_ is created by following the configuration info and is added into _**pipeline manager**_ for lifecycle control and inference action triggering.

The contents in **.yaml config file** should be well structured and follow the supported rules and entity names. Please see [yaml configuration guidance](./doc/quick_start/yaml_configuration_guide.md) for how to create or edit the config files.
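
A typical launch therefore looks like the sketch below. The launch file name matches the sample commands in the quick start guide; the config override on the commented line is only illustrative, since the actual argument name is defined by the launch file itself.

```
# Launch a pipeline; parameter manager reads the .yaml config referenced by the launch file.
ros2 launch openvino_node pipeline_people.launch.py
# Hypothetical override of the config path (argument name depends on the launch file):
# ros2 launch openvino_node pipeline_people.launch.py yaml_path:=/path/to/pipeline_people.yaml
```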

<p>
<details>
<summary>Pipeline</summary>

**Pipeline** fulfills the whole data handling process: initiliazing Input Component for image data gathering and formating; building up the structured inference network and passing the formatted data through the inference network; transfering the inference results and handling output, etc.
**Pipeline** fulfills the whole data handling process: initializing Input Component for image data gathering and formatting; building up the structured inference network and passing the formatted data through the inference network; transferring the inference results and handling output, etc.
</details>
</p>

@@ -235,7 +235,7 @@ For the snapshot of demo results, refer to the following picture.

# Installation and Launching
## Deploy in Local Environment
* Refer to the quick start document for [getting_started_with_ros2](./doc/quick_start/getting_started_with_ros2_ov2.0.md) for detailed installation & lauching instructions.
* Refer to the quick start document for [getting_started_with_ros2](./doc/quick_start/getting_started_with_ros2_ov2.0.md) for detailed installation & launching instructions.
* Refer to the quick start document for [yaml configuration guidance](./doc/quick_start/yaml_configuration_guide.md) for detailed configuration guidance.

## Deploy in Docker
@@ -274,7 +274,7 @@ For the snapshot of demo results, refer to the following picture.
* Report questions, issues and suggestions, using: [issue](https://github.com/intel/ros2_openvino_toolkit/issues).

# More Information
* ROS2 OpenVINO discription written in Chinese: https://mp.weixin.qq.com/s/BgG3RGauv5pmHzV_hkVAdw
* ROS2 OpenVINO description written in Chinese: https://mp.weixin.qq.com/s/BgG3RGauv5pmHzV_hkVAdw

###### *Any security issue should be reported using process at https://01.org/security*

@@ -1,5 +1,5 @@
neutual
neutral
happy
sad
supprise
surprise
anger
6 changes: 3 additions & 3 deletions doc/quick_start/getting_started_with_Dashing_Ubuntu18.04.md
@@ -74,7 +74,7 @@ sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_segmentation/fr
sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_detection/vehicle-license-plate-detection-barrier-0106.labels /opt/openvino_toolkit/models/vehicle-license-plate-detection/output/intel/vehicle-license-plate-detection-barrier-0106/FP32
```

* If the model (tensorflow, caffe, MXNet, ONNX, Kaldi)need to be converted to intermediate representation (For example the model for object detection)
* If the model (TensorFlow, Caffe, MXNet, ONNX, Kaldi) needs to be converted to intermediate representation (for example, the model for object detection)
* ssd_mobilenet_v2_coco
```
cd /opt/openvino_toolkit/models/
@@ -94,7 +94,7 @@ sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_detection/vehic
sudo python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/converter.py --name=yolo-v2-tf --mo /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py
```

* Before launch, check the parameter configuration in ros2_openvino_toolkit/sample/param/xxxx.yaml, make sure the paramter like model path, label path, inputs are right.
* Before launching, check the parameter configuration in ros2_openvino_toolkit/sample/param/xxxx.yaml and make sure parameters such as model path, label path and inputs are set correctly.
* run face detection sample code input from StandardCamera.
```
ros2 launch dynamic_vino_sample pipeline_people.launch.py
@@ -129,7 +129,7 @@ sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_detection/vehic
```

# More Information
* ROS2 OpenVINO discription writen in Chinese: https://mp.weixin.qq.com/s/BgG3RGauv5pmHzV_hkVAdw
* ROS2 OpenVINO description written in Chinese: https://mp.weixin.qq.com/s/BgG3RGauv5pmHzV_hkVAdw

###### *Any security issue should be reported using process at https://01.org/security*

12 changes: 6 additions & 6 deletions doc/quick_start/getting_started_with_ros2_ov2.0.md
@@ -11,7 +11,7 @@ For ROS2 foxy and galactic on ubuntu 20.04:

* Install Intel® OpenVINO™ Toolkit Version: 2022.3.</br>
Refer to: [OpenVINO_install_guide](https://docs.openvino.ai/2022.3/openvino_docs_install_guides_installing_openvino_apt.html#doxid-openvino-docs-install-guides-installing-openvino-apt)
* Install from an achive file. Both runtime and development tool are needed, `pip` is recommended for installing the development tool.</br>
* Install from an archive file. Both runtime and development tool are needed, `pip` is recommended for installing the development tool.</br>
Refer to: [OpenVINO_devtool_install_guide](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)

* Install Intel® RealSense™ SDK.</br>
@@ -54,7 +54,7 @@ source ./install/local_setup.bash
## 3. Running the Demo
### Install OpenVINO 2022.3 by PIP
OMZ tools are provided for downloading and converting models of open_model_zoo in ov2022.</br>
Refer to: [OMZtool_guide](https://pypi.org/project/openvino-dev/)
Refer to: [OMZ-tool_guide](https://pypi.org/project/openvino-dev/)

* See all available models
```
@@ -67,7 +67,7 @@ cd ~/catkin_ws/src/ros2_openvino_toolkit/data/model_list
omz_downloader --list download_model.lst -o /opt/openvino_toolkit/models/
```

* If the model (tensorflow, caffe, MXNet, ONNX, Kaldi) need to be converted to intermediate representation (such as the model for object detection):
* If the model (TensorFlow, Caffe, MXNet, ONNX, Kaldi) needs to be converted to intermediate representation (such as the model for object detection):
```
cd ~/catkin_ws/src/ros2_openvino_toolkit/data/model_list
omz_converter --list convert_model.lst -d /opt/openvino_toolkit/models/ -o /opt/openvino_toolkit/models/convert
Expand All @@ -85,7 +85,7 @@ cd ~/openvino/thirdparty/open_model_zoo/tools/model_tools
sudo python3 downloader.py --list download_model.lst -o /opt/openvino_toolkit/models/
```

* If the model (tensorflow, caffe, MXNet, ONNX, Kaldi) need to be converted to Intermediate Representation (such as the model for object detection):
* If the model (TensorFlow, Caffe, MXNet, ONNX, Kaldi) needs to be converted to Intermediate Representation (such as the model for object detection):
```
cd ~/openvino/thirdparty/open_model_zoo/tools/model_tools
sudo python3 converter.py --list convert_model.lst -d /opt/openvino_toolkit/models/ -o /opt/openvino_toolkit/models/convert
@@ -102,7 +102,7 @@ sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_segmentation/fr
sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_detection/vehicle-license-plate-detection-barrier-0106.labels /opt/openvino_toolkit/models/intel/vehicle-license-plate-detection-barrier-0106/FP32
```

* Check the parameter configuration in ros2_openvino_toolkit/sample/param/xxxx.yaml before lauching, make sure parameters such as model_path, label_path and input_path are set correctly. Please refer to the quick start document for [yaml configuration guidance](./yaml_configuration_guide.md) for detailed configuration guidance.
* Check the parameter configuration in ros2_openvino_toolkit/sample/param/xxxx.yaml before launching, make sure parameters such as model_path, label_path and input_path are set correctly. Please refer to the quick start document for [yaml configuration guide](./yaml_configuration_guide.md) for detailed configuration guidance.
* run face detection sample code input from StandardCamera.
```
ros2 launch openvino_node pipeline_people.launch.py
@@ -129,7 +129,7 @@ sudo cp ~/catkin_ws/src/ros2_openvino_toolkit/data/labels/object_detection/vehic
```

# More Information
* ROS2 OpenVINO discription writen in Chinese: https://mp.weixin.qq.com/s/BgG3RGauv5pmHzV_hkVAdw
* ROS2 OpenVINO description written in Chinese: https://mp.weixin.qq.com/s/BgG3RGauv5pmHzV_hkVAdw

###### *Any security issue should be reported using process at https://01.org/security*

Loading