
Commit

Merge pull request PaddlePaddle#99 from jiweibo/model_update
update cuda_linux_demo's 1.8 model to 2.0
jiweibo committed Mar 8, 2021
2 parents b1b7448 + 49f4dbf commit acd621d
Showing 6 changed files with 88 additions and 229 deletions.
192 changes: 0 additions & 192 deletions c++/cuda_linux_demo/CMakeLists.txt

This file was deleted.

35 changes: 25 additions & 10 deletions c++/cuda_linux_demo/README.md
@@ -10,10 +10,11 @@

After training with Paddle finishes, you obtain an inference model that can be used for deployment.

-This demo provides a mobilenet_v1 inference model, which can be downloaded from this [link](https://paddle-inference-dist.cdn.bcebos.com/PaddleInference/mobilenetv1_fp32.tar.gz) or via wget.
+This demo provides a mobilenet_v1 inference model, which can be downloaded from this [link](https://paddle-inference-dist.bj.bcebos.com/Paddle-Inference-Demo/mobilenetv1.tgz) or via wget.

```
-wget https://paddle-inference-dist.cdn.bcebos.com/PaddleInference/mobilenetv1_fp32.tar.gz
+wget https://paddle-inference-dist.bj.bcebos.com/Paddle-Inference-Demo/mobilenetv1.tgz
+tar xzf mobilenetv1.tgz
```
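As a quick sanity check after extraction, you can verify that both files of the 2.0 model format are present. This helper is our own sketch, not part of the demo; the file names match the flags used later in this README:

```shell
#!/bin/bash
# Hypothetical helper (not in the repository): verify that the extracted model
# directory contains the program file and the parameters file that the 2.0
# inference format uses.
check_model_dir() {
  local dir=$1
  [ -f "$dir/inference.pdmodel" ] && [ -f "$dir/inference.pdiparams" ]
}

# Example: check_model_dir mobilenetv1 || echo "model files missing"
```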

1.3 Include the header files
@@ -86,20 +87,34 @@ output_t->CopyToCpu(out_data.data());
2.1 Build the demo

The file `model_test.cc` is the sample inference program (its input is hard-coded; if you need to read data with OpenCV or by other means, adapt the program accordingly).
The file `CMakeLists.txt` is the build configuration file.
-The script `run_impl.sh` holds the configuration for the third-party and precompiled libraries.
+The script `compile.sh` holds the configuration for the third-party and precompiled libraries.
+The script `run.sh` is a one-step run script.

-Open `run_impl.sh` and set LIB_DIR to the path of the prepared inference library, e.g. `LIB_DIR=/work/Paddle/build/paddle_inference_install_dir`.
+Open `compile.sh` and edit the following settings:

-Run `sh run_impl.sh`; this creates a build directory under the current directory.
```shell
# Enable or disable the following three flags according to the version.txt in the precompiled library
WITH_MKL=ON
WITH_GPU=ON
USE_TENSORRT=ON

# Root directory of the inference library
LIB_DIR=${work_path}/../lib/paddle_inference

# If WITH_GPU or USE_TENSORRT is set to ON above, set the corresponding CUDA, cuDNN, and TensorRT paths.
CUDNN_LIB=/usr/lib/x86_64-linux-gnu/
CUDA_LIB=/usr/local/cuda/lib64
TENSORRT_ROOT=/usr/local/TensorRT-7.0.0.11
```
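The comment above says to set the three flags according to the precompiled library's version.txt. The check can be scripted; the sketch below is our own, and the `NAME: ON` line format is an assumption, so compare it against your actual version.txt:

```shell
#!/bin/bash
# Sketch: read an ON/OFF flag from the precompiled library's version.txt.
# The "NAME: ON" line format is assumed; adjust the pattern to your file.
flag_from_version() {
  local file=$1 name=$2
  if grep -q "^${name}: ON" "$file"; then echo ON; else echo OFF; fi
}

# Example (path assumed):
# WITH_MKL=$(flag_from_version "${LIB_DIR}/version.txt" WITH_MKL)
```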

-2.2 Run the demo
+Run `bash compile.sh`; this creates a build directory under the current directory.

-Enter the build directory and run the sample:
+2.2 Run the demo

```shell
-cd build
-./model_test --model_dir=mobilenetv1_fp32_dir
+bash run.sh
+# or run the compiled binary directly:
+./build/model_test --model_file mobilenetv1/inference.pdmodel --params_file mobilenetv1/inference.pdiparams
```

When the run finishes, the program prints the model output to the screen, which indicates a successful run.
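On a multi-GPU machine you may want to pin the demo to a single device. `CUDA_VISIBLE_DEVICES` is the standard CUDA runtime variable for this; the tiny wrapper below is our own sketch, not part of the demo:

```shell
#!/bin/bash
# Sketch: run a command pinned to one GPU by setting CUDA_VISIBLE_DEVICES
# for that command only.
run_on_gpu() {
  local gpu=$1; shift
  CUDA_VISIBLE_DEVICES=$gpu "$@"
}

# Example:
# run_on_gpu 0 ./build/model_test \
#   --model_file mobilenetv1/inference.pdmodel \
#   --params_file mobilenetv1/inference.pdiparams
```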
45 changes: 45 additions & 0 deletions c++/cuda_linux_demo/compile.sh
@@ -0,0 +1,45 @@
#!/bin/bash
set +x
set -e

work_path=$(dirname $(readlink -f $0))

# 1. check paddle_inference exists
if [ ! -d "${work_path}/../lib/paddle_inference" ]; then
echo "Please download the paddle_inference lib and move it into Paddle-Inference-Demo/lib"
exit 1
fi

# 2. check CMakeLists exists
if [ ! -f "${work_path}/CMakeLists.txt" ]; then
cp -a "${work_path}/../lib/CMakeLists.txt" "${work_path}/"
fi

# 3. compile
mkdir -p build
cd build
rm -rf *

# must match the demo source file name (model_test.cc)
DEMO_NAME=model_test

WITH_MKL=ON
WITH_GPU=ON
USE_TENSORRT=ON

LIB_DIR=${work_path}/../lib/paddle_inference
CUDNN_LIB=/usr/lib/x86_64-linux-gnu/
CUDA_LIB=/usr/local/cuda/lib64
TENSORRT_ROOT=/usr/local/TensorRT-7.0.0.11

cmake .. -DPADDLE_LIB=${LIB_DIR} \
-DWITH_MKL=${WITH_MKL} \
-DDEMO_NAME=${DEMO_NAME} \
-DWITH_GPU=${WITH_GPU} \
-DWITH_STATIC_LIB=OFF \
-DUSE_TENSORRT=${USE_TENSORRT} \
-DCUDNN_LIB=${CUDNN_LIB} \
-DCUDA_LIB=${CUDA_LIB} \
-DTENSORRT_ROOT=${TENSORRT_ROOT}

make -j
17 changes: 17 additions & 0 deletions c++/cuda_linux_demo/run.sh
@@ -0,0 +1,17 @@
#!/bin/bash
set +x
set -e

work_path=$(dirname $(readlink -f $0))

# 1. compile
bash ${work_path}/compile.sh

# 2. download model
if [ ! -d mobilenetv1 ]; then
wget https://paddle-inference-dist.bj.bcebos.com/Paddle-Inference-Demo/mobilenetv1.tgz
tar xzf mobilenetv1.tgz
fi

# 3. run
./build/model_test --model_file mobilenetv1/inference.pdmodel --params_file mobilenetv1/inference.pdiparams
26 changes: 0 additions & 26 deletions c++/cuda_linux_demo/run_impl.sh

This file was deleted.

2 changes: 1 addition & 1 deletion c++/run_demo.sh
@@ -30,7 +30,7 @@ do
done

# tmp support demos
-test_demos=(yolov3 LIC2020 resnet50 test/shrink_memory)
+test_demos=(yolov3 LIC2020 resnet50 test/shrink_memory cuda_linux_demo)

for demo in ${test_demos[@]};
do
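The loop body is collapsed in this diff view, so here is only a sketch, under our own assumptions, of how a driver like run_demo.sh might iterate the `test_demos` array and report failures; the repository's actual loop body may differ:

```shell
#!/bin/bash
# Sketch (our assumption, not the repository's actual code): run each demo's
# run.sh in its own directory and collect the names of the demos that fail.
run_all_demos() {
  local failed=()
  local demo
  for demo in "$@"; do
    if ! (cd "$demo" && bash run.sh >/dev/null 2>&1); then
      failed+=("$demo")
    fi
  done
  echo "${failed[@]}"
}
```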

