undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE #3682
Comments
Why do you need to import transformer_engine?
When I uninstall transformer_engine, it causes this error.
Can you install it? I don't find any public library with that name.
If you installed vllm using pip, building from source could be a solution.
Thank you for your advice. In my case, I didn't install vllm via pip (I built the environment locally). However, I confirmed that it was a conflict between several libraries (e.g., PyTorch, vllm, CUDA). In the end, I downgraded the vllm version when building the environment (0.3.3 to 0.2.2), and that solved it!
I have updated the build-from-source section in the installation document: https://docs.vllm.ai/en/latest/getting_started/installation.html#build-from-source The most important things: install cudatoolkit, and start in a fresh new conda environment. Then that should be all you need. You can try the latest code or the newly released 0.4.0 version. Hope it helps.
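Since the fix above comes down to getting mutually compatible versions in one environment, here is a small diagnostic sketch (not from the thread; the package names are just the ones mentioned above) that prints which of the suspect distributions are installed and at what version, so the conflict is visible before rebuilding:

```python
from importlib import metadata

def installed_version(dist: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

# Distributions mentioned in this thread as potentially conflicting.
for dist in ("vllm", "torch", "transformer-engine"):
    print(dist, installed_version(dist))
```

In a fresh conda environment this should show only the versions you deliberately installed; a stray transformer_engine or a second PyTorch build is a red flag.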
I installed vllm, but when I ran `import vllm` the error above occurred. I don't know how to solve this problem.
My installation environment is as follows.
Please help if anyone has solved this issue.
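For anyone debugging a similar `undefined symbol` error: it helps to see what the mangled name actually refers to. A minimal sketch (purely diagnostic, not part of vllm; it only handles the simple `_ZN...E` nested-name prefix of the Itanium C++ ABI, where each component is length-prefixed) that decodes the symbol without needing `c++filt`:

```python
import re

def demangle_nested_name(symbol: str) -> str:
    """Decode the qualified name from an Itanium-mangled `_ZN...` symbol.

    Handles only the leading nested-name components (e.g. `5torch3jit...`),
    which is enough to see which library a missing symbol belongs to.
    """
    m = re.match(r"_ZN(.*)", symbol)
    if not m:
        raise ValueError("not a mangled nested name")
    rest = m.group(1)
    parts = []
    # Each component is written as <decimal length><name>; stop at the
    # first non-digit (the 'E' terminator or the argument encoding).
    while rest and rest[0].isdigit():
        n = re.match(r"\d+", rest)
        length = int(n.group(0))
        start = n.end()
        parts.append(rest[start:start + length])
        rest = rest[start + length:]
    return "::".join(parts)

sym = ("_ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112"
       "basic_stringIcSt11char_traitsIcESaIcEEE")
print(demangle_nested_name(sym))  # torch::jit::parseSchemaOrName
```

The decoded name, `torch::jit::parseSchemaOrName`, is a PyTorch C++ function, and the `St7__cxx1112basic_string` in the argument encoding is the `std::__cxx11::basic_string` CXX11-ABI marker: the loader is mixing binaries built against different PyTorch builds, which matches the version-conflict diagnosis in the comments above.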