
undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE #3682

Closed
ssuncheol opened this issue Mar 28, 2024 · 7 comments
Labels: documentation (Improvements or additions to documentation)

Comments

@ssuncheol

ssuncheol commented Mar 28, 2024

I installed vllm and ran import vllm, but it fails with the error above. I don't know how to solve this problem.

My installation environment is as follows:

vllm : 0.3.3
cuda : 12.1
python : 3.10.6
torch : 2.1.2
transformers : 4.38.2
transformer-engine : 0.10.0
accelerate : 0.23.0
xformers : 0.0.23.post1

Please help if anyone has solved this issue.
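For reference, one way to narrow down where the undefined symbol comes from is to reproduce the import and list the relevant package versions. These commands are only a suggestion and were not part of the original report:

python -c "import vllm"   # the traceback shows which .so raises the undefined symbol
python -c "import torch; print(torch.__version__, torch.version.cuda)"
pip list | grep -Ei "torch|vllm|transformer|xformers"

An undefined torch::jit / at::_ops symbol in an extension .so usually points to that extension having been built against a different PyTorch version than the one currently installed.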

ssuncheol added the documentation (Improvements or additions to documentation) label on Mar 28, 2024
@youkaichao
Member

Why do you need to import transformer_engine? I believe vllm does not use it.

@ssuncheol
Author

Why do you need to import transformer_engine? I believe vllm does not use it.

When I uninstall transformer_engine, importing vllm fails with a different error:

pip uninstall transformer_engine
ModuleNotFoundError: No module named 'vllm._C'
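For context, vllm._C is vllm's compiled C++/CUDA extension, so this ModuleNotFoundError points at a broken or incomplete vllm install rather than at transformer_engine itself. A quick check (suggested commands, not from the thread) is to locate the installed package without importing it and see whether the compiled extension exists:

python - <<'EOF'
import importlib.util, os, glob
# find_spec locates the package without executing the failing import
pkg_dir = os.path.dirname(importlib.util.find_spec("vllm").origin)
print("vllm package dir:", pkg_dir)
print("compiled extensions:", glob.glob(os.path.join(pkg_dir, "_C*.so")))
EOF

If the list of compiled extensions is empty, the CUDA/C++ extension was never built or installed correctly, which is why rebuilding in a clean environment (as suggested below) tends to fix it.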

@youkaichao
Member

Can you install vllm in a fresh environment? e.g. conda create -n myenv python=3.9 -y

I can't find any public library named transformer_engine. You might have a complicated environment.
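A minimal sketch of the clean-environment route (the environment name myenv is from the comment above; the final check line is only a suggestion):

conda create -n myenv python=3.9 -y
conda activate myenv
pip install vllm          # pulls a pre-built wheel with a matching torch/CUDA combination
python -c "import vllm; print(vllm.__version__)"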

@jaesuny

jaesuny commented Mar 28, 2024

If you installed vllm using pip, building from source could be a solution.

#3630 (comment)
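A rough outline of the build-from-source route referenced above (the exact steps are in the linked comment and the installation docs; treat this as a sketch):

git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .          # builds the _C extension against the torch in the current environment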

@pangpang-xuan

Why do you need to import transformer_engine? I believe vllm does not use it.
ImportError: /home/pangpangxuan/anaconda3/envs/vllmenv/lib/python3.9/site-packages/vllm/_C.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops15to_dtype_layout4callERKNS_6TensorEN3c108optionalINS5_10ScalarTypeEEENS6_INS5_6LayoutEEENS6_INS5_6DeviceEEENS6_IbEEbbNS6_INS5_12MemoryFormatEEE
Environment:
nvidia-cublas-cu11 11.11.3.6
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu11 11.8.87
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu11 11.8.89
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu11 8.7.0.84
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu11 10.9.0.58
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu11 10.3.0.86
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu11 11.4.1.48
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu11 11.7.5.86
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu11 2.19.3
nvidia-nccl-cu12 2.19.3
nvidia-nvjitlink-cu12 12.4.99
nvidia-nvtx-cu11 11.8.86
nvidia-nvtx-cu12 12.1.105
tokenizers 0.15.2
torch 2.2.2+cu118
torchaudio 2.2.2+cu118
torchvision 0.17.2+cu118
tqdm 4.66.2
transformers 4.39.2
triton 2.2.0
typing_extensions 4.10.0
urllib3 2.2.1
uvicorn 0.29.0
uvloop 0.19.0
vllm 0.2.4+cu118
xformers 0.0.25.post1+cu118
python=3.9
@youkaichao I've run into a similar problem too. How do I solve this?
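The mixed nvidia-*-cu11 / nvidia-*-cu12 packages and the torch 2.2.2 / vllm 0.2.4 combination in this environment suggest the same root cause: the installed vllm wheel was compiled against a different torch than the one present. A rough way to confirm (suggested commands, not from this comment):

pip check                                  # reports dependency/version conflicts in the environment
python -c "import torch; print(torch.__version__, torch.version.cuda)"
pip list | grep -E "^nvidia-"              # mixed -cu11 / -cu12 entries usually mean two torch installs happened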

@ssuncheol
Author

If you installed vllm using pip, building from source could be a solution.

#3630 (comment)

Thank you for your advice. In my case, I didn't install vllm via pip (I built the environment locally).

However, I confirmed that the problem was a version conflict between several libraries (e.g., PyTorch, vllm, CUDA).

In the end, I downgraded vllm from 0.3.3 to 0.2.2 when rebuilding the environment, and that solved it!
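For anyone reproducing this fix, a sketch of pinning the older version (the source tag name v0.2.2 is an assumption; check the repo's releases):

pip install "vllm==0.2.2"       # if installing from PyPI
# or, inside a vllm source checkout:
git checkout v0.2.2             # tag name assumed
pip install -e .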

@ssuncheol ssuncheol changed the title ImportError: /usr/local/lib/python3.10/dist-packages/transformer_engine_extensions.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE ImportError: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Mar 31, 2024
@ssuncheol ssuncheol changed the title ImportError: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE Mar 31, 2024
@youkaichao
Member

I have updated the build-from-source section in the installation document: https://docs.vllm.ai/en/latest/getting_started/installation.html#build-from-source

The most important things: install cudatoolkit and start in a fresh conda environment. Then all you need is pip install -e . or python setup.py develop.

You can try the latest code or the newly released 0.4.0 version. Hope it helps.
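Putting the advice together, a sketch of the full sequence (the environment name vllm-build is made up here, and how you install the CUDA toolkit depends on your system, so that step is only noted in a comment):

conda create -n vllm-build python=3.9 -y
conda activate vllm-build
nvcc --version            # make sure a CUDA toolkit is installed and on PATH first
cd vllm                   # your vllm source checkout
pip install -e .          # or: python setup.py develop
python -c "import vllm; print(vllm.__version__)"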
