
ModuleNotFoundError: No module named 'carla' #27

Open
wang123yuan opened this issue Apr 7, 2024 · 6 comments

wang123yuan commented Apr 7, 2024

Traceback (most recent call last):
  File "ExampleVLMAgentCloseLoop.py", line 225, in <module>
    model.start()
  File "/home/v2t/LimSim/simModel/Model.py", line 177, in start
    from sumo_integration.run_synchronization import getSynchronization
  File "/home/v2t/LimSim/sumo_integration/run_synchronization.py", line 39, in <module>
    import carla  # pylint: disable=import-error
ModuleNotFoundError: No module named 'carla'

When I run ExampleVLMAgentCloseLoop.py, I get the above error. What can I do?

Fdarco (Collaborator) commented Apr 8, 2024

You should install the carla package before you run this code.
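As a quick sanity check, a minimal sketch that tests the same import the traceback fails on (the `pip install` hint and the pinned version in the message are illustrative; the carla client wheel version must match the CARLA simulator build you run):

```python
import importlib.util

# Check whether the 'carla' client library is importable; this is the same
# import that sumo_integration/run_synchronization.py performs.
carla_available = importlib.util.find_spec("carla") is not None

if carla_available:
    print("carla is importable; the co-simulation scripts should start")
else:
    # The client library is published on PyPI; the version below is only an
    # example and must match your CARLA simulator build.
    print("carla missing; try: pip install carla==0.9.15")
```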

wang123yuan (Author) commented:

When I run ExampleVLMAgentCloseLoop.py and ExampleLLMAgentCloseLoop.py, the window displays normally, but the playback is very choppy, not as smooth as in your video.
Is this a PC performance issue, or does my OpenAI account need to be upgraded? Is there anything I can do to make it a little smoother?

wang123yuan (Author) commented:

You should install the carla package before you run this code.

The window displays normally, but the playback is very choppy. ...
[Screenshot 2024-04-08 224618]

Fdarco (Collaborator) commented Apr 9, 2024

LLM inference is slow, and the program waits for the LLM's answer before continuing, so the lagging is expected.
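To illustrate why the closed loop stutters, here is a minimal sketch of a simulation loop that blocks on a slow LLM call each step (the function names and the 0.2 s latency are hypothetical, not LimSim's actual API):

```python
import time

def slow_llm_decision(observation: str) -> str:
    """Stand-in for a remote LLM call; real inference can take seconds."""
    time.sleep(0.2)  # simulated network + inference latency
    return "keep_lane"

def simulation_step(frame: int) -> str:
    """One closed-loop step: the simulator blocks until the LLM answers."""
    return slow_llm_decision(f"frame {frame}")

start = time.perf_counter()
decisions = [simulation_step(i) for i in range(3)]
elapsed = time.perf_counter() - start
# Each rendered frame waits for its LLM answer, so the display stutters.
print(f"3 frames took {elapsed:.1f}s")
```

Because every frame waits on inference, rendering speed is bounded by LLM latency rather than by the PC's graphics performance.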

wang123yuan (Author) commented:

LLM inference is slow, and the program waits for the LLM's answer before continuing, so the lagging is expected.

Ok, thanks for your reply. Does your demo video look smooth because LLM inference was sped up?

Fdarco (Collaborator) commented Apr 10, 2024

Yes, it is a replay video; you can run ExampleReplay.py to view the result.

@zijinoier changed the title from the full traceback to ModuleNotFoundError: No module named 'carla' on Apr 10, 2024