# Latest Wheel Packages

## Paddle-Serving-Server (x86 CPU/GPU)

| | develop whl | develop bin | stable whl | stable bin |
| --- | --- | --- | --- | --- |
| cpu-avx-mkl | paddle_serving_server-0.0.0-py3-none-any.whl | serving-cpu-avx-mkl-0.0.0.tar.gz | paddle_serving_server-0.7.0-py3-none-any.whl | serving-cpu-avx-mkl-0.7.0.tar.gz |
| cpu-avx-openblas | paddle_serving_server-0.0.0-py3-none-any.whl | serving-cpu-avx-openblas-0.0.0.tar.gz | paddle_serving_server-0.7.0-py3-none-any.whl | serving-cpu-avx-openblas-0.7.0.tar.gz |
| cpu-noavx-openblas | paddle_serving_server-0.0.0-py3-none-any.whl | serving-cpu-noavx-openblas-0.0.0.tar.gz | paddle_serving_server-0.7.0-py3-none-any.whl | serving-cpu-noavx-openblas-0.7.0.tar.gz |
| cuda10.1-cudnn7-TensorRT6 | paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl | serving-gpu-101-0.0.0.tar.gz | paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl | serving-gpu-101-0.7.0.tar.gz |
| cuda10.2-cudnn7-TensorRT6 | paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl | serving-gpu-102-0.0.0.tar.gz | paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl | serving-gpu-102-0.7.0.tar.gz |
| cuda10.2-cudnn8-TensorRT7 | paddle_serving_server_gpu-0.0.0.post1028-py3-none-any.whl | serving-gpu-1028-0.0.0.tar.gz | paddle_serving_server_gpu-0.7.0.post1028-py3-none-any.whl | serving-gpu-1028-0.7.0.tar.gz |
| cuda11.2-cudnn8-TensorRT8 | paddle_serving_server_gpu-0.0.0.post112-py3-none-any.whl | serving-gpu-112-0.0.0.tar.gz | paddle_serving_server_gpu-0.7.0.post112-py3-none-any.whl | serving-gpu-112-0.7.0.tar.gz |

## Binary Package

Most users do not need this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tar file cannot be downloaded at deploy time, so all the download links for the various environments are listed here.

### How to set up SERVING_BIN offline

- Download the serving server whl package and bin package, and make sure they are built for the same environment.
- Download the serving client whl and serving app whl, paying attention to the Python version.
- `pip install` the serving wheel and `tar xf` the binary package, then `export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving` (taking CUDA 11 as the example).
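The steps above can be sketched as a short shell session. This is a sketch, not the official script: it uses the stable `cuda11.2-cudnn8-TensorRT8` packages from the table above, and it assumes the tarball unpacks into a directory named after the archive.

```shell
# Offline SERVING_BIN setup sketch. Assumes the matching server whl and
# bin packages (stable 0.7.0, cuda11.2-cudnn8-TensorRT8 build here) have
# already been copied onto the offline machine.
WHL=paddle_serving_server_gpu-0.7.0.post112-py3-none-any.whl
BIN_TAR=serving-gpu-112-0.7.0.tar.gz

# pip install "$WHL"     # install the server wheel
# tar xf "$BIN_TAR"      # unpack the binary executable package

# Point Serving at the unpacked executable (assumption: the tarball
# extracts to a directory named after the archive).
export SERVING_BIN=$PWD/${BIN_TAR%.tar.gz}/serving
echo "$SERVING_BIN"
```

The `pip install` and `tar xf` lines are commented out so the sketch can be read as a checklist; uncomment them once the files are in place.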

## paddle-serving-client

| | develop whl | stable whl |
| --- | --- | --- |
| Python3.6 | paddle_serving_client-0.0.0-cp36-none-any.whl | paddle_serving_client-0.7.0-cp36-none-any.whl |
| Python3.7 | paddle_serving_client-0.0.0-cp37-none-any.whl | paddle_serving_client-0.7.0-cp37-none-any.whl |
| Python3.8 | paddle_serving_client-0.0.0-cp38-none-any.whl | paddle_serving_client-0.7.0-cp38-none-any.whl |

## paddle-serving-app

| | develop whl | stable whl |
| --- | --- | --- |
| Python3 | paddle_serving_app-0.0.0-py3-none-any.whl | paddle_serving_app-0.7.0-py3-none-any.whl |

## Baidu Kunlun Users

Kunlun users on arm-xpu or x86-xpu can download the wheel packages below. Users should use the xpu-beta docker image (see DOCKER IMAGES). Only Python 3.6 is supported for Kunlun users.

### Wheel Package Links

For ARM Kunlun users:

https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_server_xpu-0.7.0.post2-cp36-cp36m-linux_aarch64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_client-0.7.0-cp36-cp36m-linux_aarch64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_app-0.7.0-cp36-cp36m-linux_aarch64.whl

For x86 Kunlun users:

https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_server_xpu-0.7.0.post2-cp36-cp36m-linux_x86_64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_client-0.7.0-cp36-cp36m-linux_x86_64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_app-0.7.0-cp36-cp36m-linux_x86_64.whl
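As a sketch, fetching all three x86 wheels from the links above could look like the loop below. The loop only prints each download step; the actual `wget` and `pip3.6 install` lines are commented out and assume you run inside the xpu-beta docker image.

```shell
# Sketch: download and install the 0.7.0 x86 Kunlun wheels for Python 3.6.
# URLs are taken verbatim from the list above.
BASE=https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0
for WHL in \
    paddle_serving_server_xpu-0.7.0.post2-cp36-cp36m-linux_x86_64.whl \
    paddle_serving_client-0.7.0-cp36-cp36m-linux_x86_64.whl \
    paddle_serving_app-0.7.0-cp36-cp36m-linux_x86_64.whl; do
  echo "fetching $BASE/$WHL"
  # wget "$BASE/$WHL"      # uncomment to actually download
done
# pip3.6 install ./*.whl   # then install all three wheels
```

ARM users can substitute the `linux_aarch64` filenames from the ARM list in the same way.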