
Invalid argument: unable to find backend library for backend 'tensorrtllm', try specifying runtime on the model configuration. #662

@ChristophHandschuh

Description


System Info

Used the Docker image nvcr.io/nvidia/tritonserver:24.10-trtllm-python-py3 to build the engines and start the server (tensorrtllm_backend v0.15.0).
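To help rule out a version mismatch between the engines, the backend, and the runtime, here is a minimal check run inside the container (assuming the tensorrt_llm wheel is importable there, which should be the case for the trtllm-python-py3 image):

```python
# Minimal sketch: print the TensorRT-LLM and TensorRT versions available in
# the container, to compare against the tensorrtllm_backend release (v0.15.0).
# Assumes both wheels are importable in this image.
import tensorrt
import tensorrt_llm

print("tensorrt_llm version:", tensorrt_llm.__version__)
print("tensorrt version:", tensorrt.__version__)
```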

Who can help?

@byshiue @schetlur-nv

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

/

Expected behavior

All models load successfully and report READY.

actual behavior

After starting launch_triton_server.py, I encountered the following issue:

UNAVAILABLE: Not found: unable to load shared library: /opt/tritonserver/backends/tensorrtllm/libtriton_tensorrtllm.so: undefined symbol: _ZNK12tensorrt_llm8executor8Response11getErrorMsgB5cxx11Ev
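The missing symbol demangles to tensorrt_llm::executor::Response::getErrorMsg[abi:cxx11]() const, which suggests the libtriton_tensorrtllm.so backend was built against a different TensorRT-LLM executor library than the one present in the container. As a quick way to reproduce the load failure outside of Triton, here is a small sketch using ctypes (the library path is taken from the error above; forcing eager symbol resolution with RTLD_NOW should surface the same undefined-symbol error):

```python
# Quick check outside of Triton: try to dlopen the backend library directly.
# RTLD_NOW forces eager symbol resolution, so the same "undefined symbol"
# error should appear here if the TensorRT-LLM libraries do not match.
import ctypes
import os

BACKEND_LIB = "/opt/tritonserver/backends/tensorrtllm/libtriton_tensorrtllm.so"

try:
    ctypes.CDLL(BACKEND_LIB, mode=os.RTLD_NOW | os.RTLD_GLOBAL)
    print("backend library loaded cleanly")
except OSError as err:
    print("failed to load backend library:", err)
```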

additional notes

/
