Description
System Info
base docker: nvcr.io/nvidia/tritonserver:24.08-trtllm-python-py3
PyTorch version: 2.4.0+cu121
PyTorch CXX11 ABI: No
IPEX version: N/A
IPEX commit: N/A
Build type: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
IGC version: N/A
CMake version: N/A
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.119-19.0009.32-x86_64-with-glibc2.35
Is XPU available: N/A
DPCPP runtime: N/A
MKL version: N/A
GPU models and configuration onboard:
N/A
GPU models and configuration detected:
N/A
Driver version:
- intel_opencl: N/A
- level_zero: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 124
On-line CPU(s) list: 0-123
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8374C CPU @ 2.70GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 31
Socket(s): 2
Stepping: 6
BogoMIPS: 5387.29
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb ibrs_enhanced fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 wbnoinvd arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.9 MiB (62 instances)
L1i cache: 1.9 MiB (62 instances)
L2 cache: 77.5 MiB (62 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-123
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip] click-option-group==0.5.6
[pip] exceptiongroup==1.2.2
[pip] mpi4py==3.1.5
[pip] numpy==1.26.4
[pip] nvidia-cuda-cupti-cu12==12.1.105
[pip] nvidia-nccl-cu12==2.20.5
[pip] optimum==1.21.4
[pip] torch==2.4.0
[pip] torchaudio==2.4.0
[pip] transformers==4.42.4
[pip] transformers-stream-generator==0.0.5
Who can help?
No response
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
Loading the tensorrt-llm backend model with tritonserver failed. Something appears to be wrong with the environment.
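The report does not include the exact launch command. A hypothetical reproduction sketch follows; the model repository path is an assumption, not taken from the report, and the guard keeps the script harmless where tritonserver is not installed:

```shell
#!/bin/sh
# Hypothetical launch; /triton_model_repo is a placeholder for the real
# ensemble + tensorrt_llm model repository used in this deployment.
if command -v tritonserver >/dev/null 2>&1; then
  tritonserver --model-repository=/triton_model_repo
else
  echo "tritonserver not found on PATH"
fi
```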
Expected behavior
The tensorrt_llm backend model loads successfully and the server starts.
Actual behavior
E1014 08:21:29.943859 984 model_repository_manager.cc:703] "Invalid argument: ensemble 'ensemble' depends on 'tensorrt_llm' which has no loaded version. Model 'tensorrt_llm' loading failed with error: version 1 is at UNAVAILABLE state: Not found: unable to load shared library: /opt/tritonserver/backends/tensorrtllm/libtriton_tensorrtllm_common.so: undefined symbol: _ZNK12tensorrt_llm8executor8Response11getErrorMsgB5cxx11Ev;"
Additional notes
tensorrt_llm=0.12.0
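An undefined-symbol failure like this usually indicates a version/ABI mismatch between the `tensorrt_llm` runtime libraries installed in the container and the TensorRT-LLM backend `.so` shipped with it. A minimal diagnostic sketch (the library path is copied from the error log; whether it exists on your host is an assumption):

```shell
#!/bin/sh
# Demangle the missing symbol to see which TensorRT-LLM API it names
echo '_ZNK12tensorrt_llm8executor8Response11getErrorMsgB5cxx11Ev' | c++filt

# If the backend library is present (path taken from the error message),
# list which tensorrt_llm runtime libraries it expects to resolve against
LIB=/opt/tritonserver/backends/tensorrtllm/libtriton_tensorrtllm_common.so
if [ -f "$LIB" ]; then
  ldd "$LIB" | grep -i tensorrt
fi
```

The symbol demangles to a `tensorrt_llm::executor::Response::getErrorMsg` call; if `nm -D` on the installed `libtensorrt_llm.so` shows no matching export, the runtime library predates that API, and reinstalling a `tensorrt_llm` wheel that matches the backend bundled in the 24.08 container is the usual remedy.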