CPU serving deployment: how do I run and start the container? #987

@cunjing56

Description

Friendly reminder: according to informal community statistics, asking questions with the issue template speeds up replies and problem resolution.


Environment

  • [FastDeploy version]: (specify the exact version, e.g. fastdeploy-linux-gpu-0.8.0)
  • [Platform]: Windows x64 (Windows 10)
  • [Hardware]: CPU
  • [Language]: Python (3.7, 3.8, etc.)

Problem log and steps to reproduce

The PaddleDetection serving deployment example
https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/detection/paddledetection/serving
only covers running and starting the container on GPU. How do I do this on CPU?

(paddlecpu) D:\pkg\FastDeploy\examples\vision\detection\paddledetection\serving>docker run -it --net=host --name fd_serving --shm-size="1g" -v /:/serving 928d09fd7108

=============================
== Triton Inference Server ==
=============================
NVIDIA Release 21.10 (build )

Copyright (c) 2018-2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
find: File system loop detected; '/usr/bin/X11' is part of the same file system loop as '/usr/bin'.

WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
Use Docker with NVIDIA Container Toolkit to start this container; see
https://github.com/NVIDIA/nvidia-docker.

root@docker-desktop:/opt/tritonserver# fastdeployserver --model-repository=/serving/models
I1227 08:52:41.713216 17 tritonserver.cc:1920]
+----------------------------------+----------------------------------------------------------------------+
| Option | Value |
+----------------------------------+----------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.15.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dep |
| | endents) schedule_policy model_configuration system_shared_memory cu |
| | da_shared_memory binary_tensor_data statistics |
| model_repository_path[0] | /serving/models |
| model_control_mode | MODE_NONE |
| strict_model_config | 1 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| response_cache_byte_size | 0 |
| min_supported_compute_capability | 0.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+----------------------------------------------------------------------+

I1227 08:52:41.713285 17 server.cc:249] No server context available. Exiting immediately.
error: creating server: Internal - failed to stat file /serving/models
root@docker-desktop:/opt/tritonserver#
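The `failed to stat file /serving/models` error suggests the volume mount didn't land where expected: on Docker Desktop for Windows, `-v /:/serving` mounts the Linux VM's root, not the Windows drive, so `/serving/models` never exists. A sketch of what might work on CPU, using the same image ID from the log; the Windows model path below is a placeholder for wherever the exported model repository actually lives:

```shell
# Run the serving image with no GPU flags (no --gpus / nvidia runtime
# needed for CPU). Docker Desktop accepts Windows paths in -v; the
# local path below is an assumption -- substitute your own.
docker run -it --net=host --name fd_serving --shm-size="1g" \
    -v "D:\pkg\models:/serving/models" \
    928d09fd7108 \
    bash

# Inside the container, point the server at the mounted repository:
fastdeployserver --model-repository=/serving/models
```

The "NVIDIA Driver was not detected" warning in the log should then be harmless, since CPU inference does not need the driver.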

One more question: for the client side, is it enough to create a new conda Python environment and just run `python -m pip install tritonclient[all]`, or do I also need to install paddle, fastdeploy, and so on?
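On the client side, `tritonclient` alone is generally sufficient, because the client only speaks Triton's HTTP/gRPC protocol; paddle and fastdeploy are not needed just to send requests. A minimal sketch, assuming hypothetical model/tensor names and an input shape (the real ones come from the model's config.pbtxt in the model repository):

```python
import numpy as np


def preprocess(image_hwc):
    """Convert an HWC uint8 image into a float32 NCHW batch of one."""
    chw = image_hwc.astype(np.float32).transpose(2, 0, 1)  # HWC -> CHW
    return chw[np.newaxis, ...]                            # add batch dim


def infer(image_hwc, url="localhost:8000"):
    # tritonclient is imported lazily so preprocess() can be used
    # without the package installed.
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url=url)
    batch = preprocess(image_hwc)

    # "INPUT", "OUTPUT", and the model name "ppyoloe" are placeholders;
    # substitute the names declared in the model's config.pbtxt.
    inp = httpclient.InferInput("INPUT", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    out = httpclient.InferRequestedOutput("OUTPUT")

    result = client.infer("ppyoloe", inputs=[inp], outputs=[out])
    return result.as_numpy("OUTPUT")


if __name__ == "__main__":
    dummy = np.zeros((320, 320, 3), dtype=np.uint8)
    print(preprocess(dummy).shape)  # (1, 3, 320, 320)
```

Note this only covers the raw Triton protocol; if the deployed pipeline expects FastDeploy-specific pre/post-processing on the client, the FastDeploy serving example's client script would be the reference.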
