Merged
Commits
26 commits
f037275
Add Readme for vision results
yunyaoXYY Oct 17, 2022
7599543
Merge branch 'develop' of https://github.com/PaddlePaddle/FastDeploy …
yunyaoXYY Oct 17, 2022
9997fed
Add Readme for vision results
yunyaoXYY Oct 17, 2022
6fd8784
Add Readme for vision results
yunyaoXYY Oct 17, 2022
c34823f
Add Readme for vision results
yunyaoXYY Oct 17, 2022
ae11b80
Add Readme for vision results
yunyaoXYY Oct 17, 2022
cef3415
Add Readme for vision results
yunyaoXYY Oct 17, 2022
532f18f
Add Readme for vision results
yunyaoXYY Oct 17, 2022
302743c
Add Readme for vision results
yunyaoXYY Oct 17, 2022
b9968f6
Add Readme for vision results
yunyaoXYY Oct 17, 2022
5be415e
Add Readme for vision results
yunyaoXYY Oct 17, 2022
d04eaf9
Add comments to create API docs
yunyaoXYY Oct 18, 2022
9f3c59c
Merge branch 'develop' of https://github.com/PaddlePaddle/FastDeploy …
yunyaoXYY Oct 18, 2022
7283757
Improve OCR comments
yunyaoXYY Oct 18, 2022
bccb9e4
fix conflict
yunyaoXYY Oct 24, 2022
51ad562
fix conflict
yunyaoXYY Oct 24, 2022
aecbf00
Fix OCR Readme
yunyaoXYY Oct 24, 2022
4ea1e2a
Merge branch 'develop' of https://github.com/PaddlePaddle/FastDeploy …
yunyaoXYY Nov 9, 2022
10e6107
Fix PPOCR readme
yunyaoXYY Nov 9, 2022
94a4d8a
Fix PPOCR readme
yunyaoXYY Nov 9, 2022
6017c8a
fix conflict
yunyaoXYY Jan 31, 2023
9d904ce
fix conflict
yunyaoXYY Jan 31, 2023
4c9ea47
Improve ascend readme
yunyaoXYY Feb 2, 2023
e313f99
Improve ascend readme
yunyaoXYY Feb 2, 2023
d12b7c4
Improve ascend readme
yunyaoXYY Feb 2, 2023
18c4fa8
Improve ascend readme
yunyaoXYY Feb 2, 2023
12 changes: 10 additions & 2 deletions docs/cn/build_and_install/huawei_ascend.md
@@ -118,5 +118,13 @@ FastDeploy has now integrated FlyCV, which users can enable on supported hardware platforms


## VI. Ascend Deployment Demo Reference
- To deploy the PaddleClas classification model on Huawei Ascend NPU with C++, refer to: [PaddleClas Huawei Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README.md)
- To deploy the PaddleClas classification model on Huawei Ascend NPU with Python, refer to: [PaddleClas Huawei Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README.md)

| Model Series | C++ Deployment Example | Python Deployment Example |
| :-----------| :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README_CN.md) |
| PaddleDetection | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/paddledetection/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/paddledetection/python/README_CN.md) |
| PaddleSeg | [Ascend NPU C++ Deployment Example](../../../examples/vision/segmentation/paddleseg/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/segmentation/paddleseg/python/README_CN.md) |
| PaddleOCR | [Ascend NPU C++ Deployment Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/ocr/PP-OCRv3/python/README_CN.md) |
| YOLOv5 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov5/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov5/python/README_CN.md) |
| YOLOv6 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov6/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov6/python/README_CN.md) |
| YOLOv7 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov7/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov7/python/README_CN.md) |
12 changes: 9 additions & 3 deletions docs/en/build_and_install/huawei_ascend.md
@@ -117,6 +117,12 @@ In end-to-end model inference, the pre-processing and post-processing phases are


## Deployment demo reference
- To deploy the PaddleClas classification model on Huawei Ascend NPU with C++, refer to: [PaddleClas Huawei Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README.md)

- To deploy the PaddleClas classification model on Huawei Ascend NPU with Python, refer to: [PaddleClas Huawei Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README.md)
| Model | C++ Example | Python Example |
| :-----------| :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Example](../../../examples/vision/classification/paddleclas/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/classification/paddleclas/python/README.md) |
| PaddleDetection | [Ascend NPU C++ Example](../../../examples/vision/detection/paddledetection/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/paddledetection/python/README.md) |
| PaddleSeg | [Ascend NPU C++ Example](../../../examples/vision/segmentation/paddleseg/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/segmentation/paddleseg/python/README.md) |
| PaddleOCR | [Ascend NPU C++ Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/ocr/PP-OCRv3/python/README.md) |
| YOLOv5 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov5/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov5/python/README.md) |
| YOLOv6 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov6/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov6/python/README.md) |
| YOLOv7 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov7/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov7/python/README.md) |
16 changes: 10 additions & 6 deletions examples/vision/detection/paddledetection/cpp/README.md
@@ -1,7 +1,7 @@
English | [简体中文](README_CN.md)
# PaddleDetection C++ Deployment Example

This directory provides examples in which `infer_xxx.cc` quickly completes the deployment of PaddleDetection models, including PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet, on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, two steps require confirmation

@@ -15,13 +15,13 @@ ppyoloe is taken as an example for inference deployment

mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the PPYOLOE model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
@@ -33,12 +33,16 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
# TensorRT Inference on GPU
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
# KunlunXin XPU inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 3
# Huawei Ascend inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 4
```
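In these demos the trailing integer selects the inference device/backend. As a rough illustration of that dispatch, here is a hypothetical Python sketch; the mapping is inferred from the comments in the commands above, not taken from `infer_ppyoloe.cc`:

```python
# Hypothetical mapping of the demo's trailing integer argument to a run mode,
# inferred from the command comments above (not FastDeploy source code).
RUN_OPTIONS = {
    0: "CPU inference",
    1: "GPU inference",
    2: "TensorRT inference on GPU",
    3: "KunlunXin XPU inference",
    4: "Huawei Ascend inference",
}

def describe_run_option(flag):
    """Return a human-readable description of a demo run flag."""
    if flag not in RUN_OPTIONS:
        raise ValueError(f"unsupported run option: {flag}")
    return RUN_OPTIONS[flag]

print(describe_run_option(4))  # Huawei Ascend inference
```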

The above commands work on Linux and macOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## PaddleDetection C++ Interface

### Model Class

@@ -56,7 +60,7 @@ Loading and initializing PaddleDetection PPYOLOE model, where the format of mode

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Configuration file path, which is the deployment yaml file exported by PaddleDetection
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
@@ -73,7 +77,7 @@ Loading and initializing PaddleDetection PPYOLOE model, where the format of mode
> **Parameter**
>
> > * **im**: Input images in HWC or BGR format
> > * **result**: Detection result, including detection box and confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult
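A detection result bundles parallel arrays of boxes, scores, and labels. The sketch below is a hypothetical Python illustration of that shape and of confidence filtering; the real `DetectionResult` is a C++ struct described in the linked docs:

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResultSketch:
    # Hypothetical stand-in for DetectionResult: each box is [x1, y1, x2, y2];
    # scores and label_ids are parallel lists, one entry per box.
    boxes: list = field(default_factory=list)
    scores: list = field(default_factory=list)
    label_ids: list = field(default_factory=list)

    def filter(self, conf_threshold):
        """Keep only detections whose confidence meets the threshold."""
        kept = [i for i, s in enumerate(self.scores) if s >= conf_threshold]
        return DetectionResultSketch(
            boxes=[self.boxes[i] for i in kept],
            scores=[self.scores[i] for i in kept],
            label_ids=[self.label_ids[i] for i in kept],
        )

r = DetectionResultSketch(
    boxes=[[0, 0, 10, 10], [5, 5, 20, 20]],
    scores=[0.9, 0.3],
    label_ids=[0, 1],
)
print(len(r.filter(0.5).boxes))  # 1
```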

- [Model Description](../../)
- [Python Deployment](../python)
12 changes: 8 additions & 4 deletions examples/vision/detection/paddledetection/python/README.md
@@ -9,11 +9,11 @@ Before deployment, two steps require confirmation.
This directory provides examples in which `infer_xxx.py` quickly completes the deployment of PPYOLOE/PicoDet models on CPU/GPU, as well as on GPU with TensorRT acceleration. The script is as follows

```bash
# Download deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/paddledetection/python/

# Download the PPYOLOE model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
@@ -24,14 +24,18 @@ python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu
# TensorRT inference on GPU (Note: TensorRT builds and serializes the model on the first run, which takes some time. Please be patient.)
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device ascend
```
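The `--device` flag in the commands above selects where inference runs, and `--use_trt` only applies on GPU. Here is a hypothetical Python sketch of that flag handling; the function name and the returned settings are illustrative, not FastDeploy's actual `RuntimeOption` API:

```python
def build_runtime_settings(device, use_trt=False):
    """Illustrative mapping of the demo's --device/--use_trt flags to
    runtime settings (hypothetical; not FastDeploy's real API)."""
    device = device.lower()
    if device not in {"cpu", "gpu", "kunlunxin", "ascend"}:
        raise ValueError(f"unsupported device: {device}")
    if use_trt and device != "gpu":
        raise ValueError("TensorRT acceleration requires --device gpu")
    return {"device": device, "backend": "tensorrt" if use_trt else "default"}

print(build_runtime_settings("gpu", use_trt=True))
```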

The visualized result after running is as follows
<div align="center">
<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg" width="480" height="320" />
</div>

## PaddleDetection Python Interface

```python
fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
@@ -52,7 +56,7 @@ PaddleDetection model loading and initialization, among which model_file and par

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference configuration yaml file path
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
12 changes: 7 additions & 5 deletions examples/vision/detection/yolov5/cpp/README.md
@@ -12,12 +12,12 @@ Taking the CPU inference on Linux as an example, the compilation test can be completed
```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# Download the official converted yolov5 Paddle model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
tar -xvf yolov5s_infer.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
@@ -31,11 +31,13 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
./infer_paddle_demo yolov5s_infer 000000014439.jpg 2
# KunlunXin XPU inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 3
# Huawei Ascend inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 4
```

The above steps apply to the inference of Paddle models. If you want to conduct the inference of ONNX models, follow these steps:
```bash
# 1. Download the official converted yolov5 ONNX model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

@@ -53,7 +55,7 @@ The visualized result after running is as follows
The above commands work on Linux and macOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## YOLOv5 C++ Interface

### YOLOv5 Class

@@ -69,7 +71,7 @@ YOLOv5 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
> * **model_format**(ModelFormat): Model format. ONNX format by default
8 changes: 5 additions & 3 deletions examples/vision/detection/yolov5/python/README.md
@@ -22,17 +22,19 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
python infer.py --model yolov5s_infer --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device ascend
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

## YOLOv5 Python Interface

```python
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
@@ -42,7 +44,7 @@ YOLOv5 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
> * **model_format**(ModelFormat): Model format. ONNX format by default
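Since `params_file` is only needed for Paddle-format models, a loader can infer the model format from its inputs. The sketch below is a hypothetical illustration of that rule; the function, the decision logic, and the returned format names are assumptions for illustration, not FastDeploy's actual API:

```python
def guess_model_format(model_file, params_file=None):
    """Illustrative rule: .onnx model files need no params_file,
    while Paddle-format models require one (hypothetical helper)."""
    if model_file.endswith(".onnx"):
        if params_file:
            raise ValueError("ONNX models do not take a params_file")
        return "ONNX"
    if not params_file:
        raise ValueError("Paddle models require a params_file")
    return "PADDLE"

print(guess_model_format("yolov5s.onnx"))                      # ONNX
print(guess_model_format("model.pdmodel", "model.pdiparams"))  # PADDLE
```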
9 changes: 6 additions & 3 deletions examples/vision/detection/yolov6/python/README.md
@@ -23,6 +23,9 @@ python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --d
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device gpu
# KunlunXin XPU inference
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device ascend
```
If you want to verify the inference of ONNX models, refer to the following command:
```bash
@@ -34,15 +37,15 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184301725-390e4abb-db2b-482d-931d-469381322626.jpg">

## YOLOv6 Python Interface

```python
fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
@@ -52,7 +55,7 @@ YOLOv6 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
> * **model_format**(ModelFormat): Model format. ONNX format by default
14 changes: 8 additions & 6 deletions examples/vision/detection/yolov7/cpp/README.md
@@ -1,7 +1,7 @@
English | [简体中文](README_CN.md)
# YOLOv7 C++ Deployment Example

This directory provides examples in which `infer.cc` quickly completes the deployment of YOLOv7 on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, two steps require confirmation

@@ -13,7 +13,7 @@ Taking the CPU inference on Linux as an example, the compilation test can be completed
```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
@@ -29,10 +29,12 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 1
# KunlunXin XPU inference
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 2
# Huawei Ascend inference
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 3
```
If you want to verify the inference of ONNX models, refer to the following command:
```bash
# Download the official converted yolov7 ONNX model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

@@ -52,7 +54,7 @@ The visualized result after running is as follows
The above commands work on Linux and macOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## YOLOv7 C++ Interface

### YOLOv7 Class

@@ -68,7 +70,7 @@ YOLOv7 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
> * **model_format**(ModelFormat): Model format. ONNX format by default
@@ -86,7 +88,7 @@ YOLOv7 model loading and initialization, among which model_file is the exported
> **Parameter**
>
> > * **im**: Input images in HWC or BGR format
> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
> > * **conf_threshold**: Filtering threshold of detection box confidence
> > * **nms_iou_threshold**: IoU threshold during NMS processing
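To illustrate how the two thresholds interact, here is a minimal pure-Python sketch of standard NMS (an illustration of the general algorithm, not FastDeploy's implementation): detections below `conf_threshold` are dropped first, then any box overlapping a higher-scoring kept box by more than `nms_iou_threshold` is suppressed.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, conf_threshold=0.25, nms_iou_threshold=0.5):
    """Return indices of boxes kept after confidence filtering and NMS."""
    # Consider only boxes above the confidence threshold, best first.
    order = sorted(
        (i for i, s in enumerate(scores) if s >= conf_threshold),
        key=lambda i: scores[i], reverse=True,
    )
    kept = []
    for i in order:
        # Keep a box only if it does not overlap any already-kept box too much.
        if all(iou(boxes[i], boxes[j]) <= nms_iou_threshold for j in kept):
            kept.append(i)
    return kept

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 beyond the IoU threshold
```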
